
Informatica®

10.2

Release Guide
Informatica Release Guide
10.2
September 2017
© Copyright Informatica LLC 2003, 2018

This software and documentation are provided only under a separate license agreement containing restrictions on use and disclosure. No part of this document may be
reproduced or transmitted in any form, by any means (electronic, photocopying, recording or otherwise) without prior consent of Informatica LLC.

Informatica, the Informatica logo, PowerCenter, PowerExchange, Big Data Management and Live Data Map are trademarks or registered trademarks of Informatica LLC
in the United States and many jurisdictions throughout the world. A current list of Informatica trademarks is available on the web at [Link]. Other company and product names may be trade names or trademarks of their respective owners.

Portions of this software and/or documentation are subject to copyright held by third parties, including without limitation: Copyright DataDirect Technologies. All rights
reserved. Copyright © Sun Microsystems. All rights reserved. Copyright © RSA Security Inc. All Rights Reserved. Copyright © Ordinal Technology Corp. All rights
reserved. Copyright © Aandacht c.v. All rights reserved. Copyright Genivia, Inc. All rights reserved. Copyright Isomorphic Software. All rights reserved. Copyright © Meta
Integration Technology, Inc. All rights reserved. Copyright © Intalio. All rights reserved. Copyright © Oracle. All rights reserved. Copyright © Adobe Systems Incorporated.
All rights reserved. Copyright © DataArt, Inc. All rights reserved. Copyright © ComponentSource. All rights reserved. Copyright © Microsoft Corporation. All rights
reserved. Copyright © Rogue Wave Software, Inc. All rights reserved. Copyright © Teradata Corporation. All rights reserved. Copyright © Yahoo! Inc. All rights reserved.
Copyright © Glyph & Cog, LLC. All rights reserved. Copyright © Thinkmap, Inc. All rights reserved. Copyright © Clearpace Software Limited. All rights reserved. Copyright
© Information Builders, Inc. All rights reserved. Copyright © OSS Nokalva, Inc. All rights reserved. Copyright Edifecs, Inc. All rights reserved. Copyright Cleo
Communications, Inc. All rights reserved. Copyright © International Organization for Standardization 1986. All rights reserved. Copyright © ej-technologies GmbH. All
rights reserved. Copyright © Jaspersoft Corporation. All rights reserved. Copyright © International Business Machines Corporation. All rights reserved. Copyright ©
yWorks GmbH. All rights reserved. Copyright © Lucent Technologies. All rights reserved. Copyright © University of Toronto. All rights reserved. Copyright © Daniel
Veillard. All rights reserved. Copyright © Unicode, Inc. Copyright IBM Corp. All rights reserved. Copyright © MicroQuill Software Publishing, Inc. All rights reserved.
Copyright © PassMark Software Pty Ltd. All rights reserved. Copyright © LogiXML, Inc. All rights reserved. Copyright © 2003-2010 Lorenzi Davide, All rights reserved.
Copyright © Red Hat, Inc. All rights reserved. Copyright © The Board of Trustees of the Leland Stanford Junior University. All rights reserved. Copyright © EMC
Corporation. All rights reserved. Copyright © Flexera Software. All rights reserved. Copyright © Jinfonet Software. All rights reserved. Copyright © Apple Inc. All rights
reserved. Copyright © Telerik Inc. All rights reserved. Copyright © BEA Systems. All rights reserved. Copyright © PDFlib GmbH. All rights reserved. Copyright ©
Orientation in Objects GmbH. All rights reserved. Copyright © Tanuki Software, Ltd. All rights reserved. Copyright © Ricebridge. All rights reserved. Copyright © Sencha,
Inc. All rights reserved. Copyright © Scalable Systems, Inc. All rights reserved. Copyright © jQWidgets. All rights reserved. Copyright © Tableau Software, Inc. All rights
reserved. Copyright© MaxMind, Inc. All Rights Reserved. Copyright © TMate Software s.r.o. All rights reserved. Copyright © MapR Technologies Inc. All rights reserved.
Copyright © Amazon Corporate LLC. All rights reserved. Copyright © Highsoft. All rights reserved. Copyright © Python Software Foundation. All rights reserved.
Copyright © [Link]. All rights reserved. Copyright © CNRI. All rights reserved.

This product includes software developed by the Apache Software Foundation ([Link]) and/or other software which is licensed under various versions of the Apache License (the "License"). You may obtain a copy of these Licenses at [Link]. Unless required by applicable law or
agreed to in writing, software distributed under these Licenses is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express
or implied. See the Licenses for the specific language governing permissions and limitations under the Licenses.

This product includes software which was developed by Mozilla ([Link]); software copyright The JBoss Group, LLC, all rights reserved; software copyright © 1999-2006 by Bruno Lowagie and Paulo Soares; and other software which is licensed under various versions of the GNU Lesser General Public License Agreement, which may be found at [Link]. The materials are provided free of charge by Informatica, "as-is", without warranty of any
kind, either express or implied, including but not limited to the implied warranties of merchantability and fitness for a particular purpose.

The product includes ACE(TM) and TAO(TM) software copyrighted by Douglas C. Schmidt and his research group at Washington University, University of California,
Irvine, and Vanderbilt University, Copyright (©) 1993-2006, all rights reserved.

This product includes software developed by the OpenSSL Project for use in the OpenSSL Toolkit (copyright The OpenSSL Project. All Rights Reserved) and
redistribution of this software is subject to terms available at [Link] and [Link].

This product includes Curl software which is Copyright 1996-2013, Daniel Stenberg, <daniel@[Link]>. All Rights Reserved. Permissions and limitations regarding this
software are subject to terms available at [Link]. Permission to use, copy, modify, and distribute this software for any purpose with or
without fee is hereby granted, provided that the above copyright notice and this permission notice appear in all copies.

The product includes software copyright 2001-2005 (©) MetaStuff, Ltd. All Rights Reserved. Permissions and limitations regarding this software are subject to terms
available at [Link].

The product includes software copyright © 2004-2007, The Dojo Foundation. All Rights Reserved. Permissions and limitations regarding this software are subject to
terms available at [Link].

This product includes ICU software which is copyright International Business Machines Corporation and others. All rights reserved. Permissions and limitations
regarding this software are subject to terms available at [Link].

This product includes software copyright © 1996-2006 Per Bothner. All rights reserved. Your right to use such materials is set forth in the license which may be found at [Link].

This product includes OSSP UUID software which is Copyright © 2002 Ralf S. Engelschall, Copyright © 2002 The OSSP Project, Copyright © 2002 Cable & Wireless Deutschland. Permissions and limitations regarding this software are subject to terms available at [Link].

This product includes software developed by Boost ([Link]) or under the Boost software license. Permissions and limitations regarding this software are subject to terms available at [Link]/LICENSE_1_0.txt.

This product includes software copyright © 1997-2007 University of Cambridge. Permissions and limitations regarding this software are subject to terms available at
[Link].

This product includes software copyright © 2007 The Eclipse Foundation. All Rights Reserved. Permissions and limitations regarding this software are subject to terms
available at [Link] and at [Link].

This product includes software licensed under the terms at numerous third-party license locations, including the W3C Software License (Consortium/Legal/2002/copyright-software-20021231), the Tcl/Tk license, the iODBC license, the CreateJS EaselJS license, and the Fuse Message Broker v5.3 license agreement, among others.

This product includes software licensed under the Academic Free License ([Link]), the Common Development and Distribution License ([Link]), the Common Public License ([Link]), the Sun Binary Code License Agreement Supplemental License Terms, the BSD License ([Link]), the new BSD License ([Link]/licenses/BSD-3-Clause), the MIT License ([Link]), the Artistic License ([Link]/licenses/artistic-license-1.0) and the Initial Developer’s Public License Version 1.0 ([Link]).

This product includes software copyright © 2003-2006 Joe Walnes, 2006-2007 XStream Committers. All rights reserved. Permissions and limitations regarding this software are subject to terms available at [Link]. This product includes software developed by the Indiana University Extreme! Lab. For further information please visit [Link].

This product includes software Copyright (c) 2013 Frank Balluffi and Markus Moeller. All rights reserved. Permissions and limitations regarding this software are subject
to terms of the MIT license.

See patents at [Link].

DISCLAIMER: Informatica LLC provides this documentation "as is" without warranty of any kind, either express or implied, including, but not limited to, the implied
warranties of noninfringement, merchantability, or use for a particular purpose. Informatica LLC does not warrant that this software or documentation is error free. The
information provided in this software or documentation may include technical inaccuracies or typographical errors. The information in this software and documentation
is subject to change at any time without notice.

NOTICES

This Informatica product (the "Software") includes certain drivers (the "DataDirect Drivers") from DataDirect Technologies, an operating company of Progress Software
Corporation ("DataDirect") which are subject to the following terms and conditions:

1. THE DATADIRECT DRIVERS ARE PROVIDED "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING BUT NOT LIMITED TO,
THE IMPLIED WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT.
2. IN NO EVENT WILL DATADIRECT OR ITS THIRD PARTY SUPPLIERS BE LIABLE TO THE END-USER CUSTOMER FOR ANY DIRECT, INDIRECT, INCIDENTAL,
SPECIAL, CONSEQUENTIAL OR OTHER DAMAGES ARISING OUT OF THE USE OF THE ODBC DRIVERS, WHETHER OR NOT INFORMED OF THE POSSIBILITIES
OF DAMAGES IN ADVANCE. THESE LIMITATIONS APPLY TO ALL CAUSES OF ACTION, INCLUDING, WITHOUT LIMITATION, BREACH OF CONTRACT, BREACH
OF WARRANTY, NEGLIGENCE, STRICT LIABILITY, MISREPRESENTATION AND OTHER TORTS.

The information in this documentation is subject to change without notice. If you find any problems in this documentation, report them to us at
infa_documentation@[Link].

Informatica products are warranted according to the terms and conditions of the agreements under which they are provided. INFORMATICA PROVIDES THE
INFORMATION IN THIS DOCUMENT "AS IS" WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING WITHOUT LIMITATION ANY WARRANTIES OF
MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND ANY WARRANTY OR CONDITION OF NON-INFRINGEMENT.

Publication Date: 2018-01-09


Table of Contents
Preface . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23
Informatica Resources. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23
Informatica Network. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23
Informatica Knowledge Base. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23
Informatica Documentation. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23
Informatica Product Availability Matrixes. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24
Informatica Velocity. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24
Informatica Marketplace. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24
Informatica Global Customer Support. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24

Part I: Version 10.2. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25

Chapter 1: New Products (10.2). . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 26


PowerExchange Adapters. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 26
PowerExchange Adapters for Informatica. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 26

Chapter 2: New Features (10.2). . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 27


Application Services. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 27
Model Repository Service. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 27
Big Data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 28
Big Data Management Installation. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 28
Cluster Configuration. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 28
Processing Hierarchical Data. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 28
Stateful Computing on the Spark Engine. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29
Data Integration Service Queuing. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29
Blaze Job Monitor. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29
Data Integration Service Properties for Hadoop Integration. . . . . . . . . . . . . . . . . . . . . . . . 30
Sqoop. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30
Autoscaling in an Amazon EMR Cluster. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30
Transformation Support on the Blaze Engine. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 31
Hive Functionality for the Blaze Engine. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 31
Transformation Support on the Spark Engine. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 31
Hive Functionality for the Spark Engine. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 31
Command Line Programs. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 31
infacmd cluster Commands. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 31
infacmd dis Options. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 32
infacmd ipc Commands. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 33
infacmd isp Commands. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 33
infacmd mrs Commands. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 37
infacmd ms Commands. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 38

infacmd wfs Commands. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 38
infasetup Commands. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 38
pmrep Commands. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 39
Data Types. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 39
Informatica Data Types. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 40
Documentation. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 40
Enterprise Information Catalog. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 41
New Data Sources. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 41
Custom Scanner Framework. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 42
REST APIs. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 42
Composite Data Domains. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 42
Data Domains. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 42
Export and Import of Custom Attributes. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 43
Rich Text as Custom Attribute Value. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 43
Transformation Logic . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 43
Unstructured File Types. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 44
Value Frequency. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 44
Deployment Support for Azure HDInsight. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 44
Informatica Analyst. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 44
Profiles. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 45
Intelligent Data Lake. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 45
Validate and Assess Data Using Visualization with Apache Zeppelin. . . . . . . . . . . . . . . . . . 45
Assess Data Using Filters During Data Preview. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 45
Enhanced Layout of Recipe Panel. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 45
Apply Data Quality Rules. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 46
View Business Terms for Data Assets in Data Preview and Worksheet View. . . . . . . . . . . . . 46
Prepare Data for Delimited Files. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 46
Edit Joins in a Joined Worksheet. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 46
Edit Sampling Settings for Data Preparation. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 46
Support for Multiple Enterprise Information Catalog Resources in the Data Lake. . . . . . . . . . 46
Use Oracle for the Data Preparation Service Repository. . . . . . . . . . . . . . . . . . . . . . . . . . 46
Improved Scalability for the Data Preparation Service. . . . . . . . . . . . . . . . . . . . . . . . . . . 47
Informatica Developer. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 47
Nonrelational Data Objects. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 47
Profiles. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 47
Informatica Installation. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 47
Informatica Upgrade Advisor. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 47
Intelligent Streaming. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 47
CSV Format. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 48
Data Types. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 48
Connections. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 48
Pass-Through Mappings. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 48

Sources and Targets. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 48
Transformation Support. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 48
Metadata Manager. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 49
Cloudera Navigator. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 49
PowerCenter. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 49
PowerExchange Adapters. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 50
PowerExchange Adapters for Informatica. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 50
PowerExchange Adapters for PowerCenter. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 52
Rule Specifications. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 54
Security. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 54
User Activity Logs. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 55
Transformation Language. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 55
Informatica Transformation Language. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 55
Transformations. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 56
Informatica Transformations. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 56
PowerCenter Transformations. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 60
Workflows. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 60
Informatica Workflows. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 60

Chapter 3: Changes (10.2). . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 62


Support Changes. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 62
Big Data Hadoop Distribution Support. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 63
Metadata Manager. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 65
Application Services. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 65
Content Management Service. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 66
Data Integration Service. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 66
Big Data. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 66
Hadoop Connection. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 67
HBase Connection Properties . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 69
Hive Connection Properties. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 69
HBase Connection Properties for MapR-DB . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 70
Mapping Run-time Properties. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 71
Monitoring. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 71
S3 Access and Secret Key Properties. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 71
Sqoop. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 72
Command Line Programs. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 72
Enterprise Information Catalog. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 73
Product Name Changes. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 73
Informatica Analyst. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 73
Parameters. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 73
Intelligent Streaming. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 73
Kafka Data Object Changes. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 73
PowerExchange Adapters. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 74

PowerExchange Adapters for Informatica. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 74
PowerExchange Adapters for PowerCenter. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 74
Security. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 76
SAML Authentication. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 76
Transformations. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 76
Informatica Transformations. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 76
Workflows. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 77
Informatica Workflows. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 77

Chapter 4: Release Tasks (10.2). . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 78


PowerExchange Adapters. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 78
PowerExchange Adapters for PowerCenter. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 78

Part II: Version 10.1.1. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 81

Chapter 5: New Features, Changes, and Release Tasks (10.1.1 HotFix 1). . . . 82
New Products (10.1.1 HotFix 1). . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 82
PowerExchange for Cloud Applications. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 82
New Features (10.1.1 HotFix 1). . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 82
Command Line Programs. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 82
Informatica Analyst. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 83
PowerCenter. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 84
PowerExchange Adapters. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 84
Changes (10.1.1 HotFix 1). . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 86
Support Changes. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 86

Chapter 6: New Features, Changes, and Release Tasks (10.1.1 Update 2). . . . 87
New Products (10.1.1 Update 2). . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 87
PowerExchange for MapR-DB. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 87
New Features (10.1.1 Update 2). . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 87
Big Data Management. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 88
Enterprise Information Catalog. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 89
Intelligent Data Lake. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 90
PowerExchange Adapters for Informatica. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 90
Changes (10.1.1 Update 2). . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 90
Support Changes. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 91
Big Data Management. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 92
Enterprise Information Catalog. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 92
PowerExchange Adapters for Informatica. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 93

Chapter 7: New Features, Changes, and Release Tasks (10.1.1 Update 1). . . . 94
New Features (10.1.1 Update 1). . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 94
Big Data Management. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 94

Changes (10.1.1 Update 1). . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 94
PowerExchange Adapters for Informatica. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 95
Release Tasks (10.1.1 Update 1). . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 95
PowerExchange Adapters for Informatica. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 95

Chapter 8: New Products (10.1.1). . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 96


Intelligent Streaming. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 96
PowerExchange Adapters. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 97
PowerExchange Adapters for Informatica. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 97

Chapter 9: New Features (10.1.1). . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 98


Application Services. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 98
Analyst Service. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 98
Big Data. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 99
Blaze Engine. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 99
Installation and Configuration. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 100
Spark Engine. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 101
Security. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 102
Sqoop. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 103
Business Glossary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 103
Export Rich Text as Plain Text. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 103
Include Rich Text Content for Conflicting Assets. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 103
Command Line Programs. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 103
infacmd as Commands. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 104
infacmd dis command. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 104
infacmd mrs command. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 104
pmrep Commands. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 105
Enterprise Information Catalog. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 105
Business Glossary Integration. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 105
Column Similarity Profiling. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 106
Data Domains and Data Domain Groups. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 106
Lineage and Impact Analysis. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 106
Permissions for Users and User Groups. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 107
New Resource Types. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 107
Synonym Definition Files. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 108
Universal Connectivity Framework. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 108
Informatica Analyst. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 108
Profiles. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 108
Informatica Installation. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 108
Informatica Upgrade Advisor. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 108
Intelligent Data Lake. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 109
Data Preview for Tables in External Sources. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 109
Importing Data From Tables in External Sources. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 109

Exporting Data to External Targets. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 109
Configuring Sampling Criteria for Data Preparation. . . . . . . . . . . . . . . . . . . . . . . . . . . . 109
Performing a Lookup on Worksheets. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 109
Downloading as a TDE File. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 109
Sentry and Ranger Support. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 110
Mappings . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 110
Informatica Mappings. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 110
Metadata Manager. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 110
Dataset Extraction for Cloudera Navigator Resources. . . . . . . . . . . . . . . . . . . . . . . . . . . 110
Mapping Extraction for Informatica Platform Resources. . . . . . . . . . . . . . . . . . . . . . . . . 110
PowerExchange Adapters. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 111
PowerExchange® Adapters for Informatica. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 111
PowerExchange Adapters for PowerCenter®. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 112
Security. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 113
Custom Kerberos Libraries. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 113
Scheduler Service Support in Kerberos-Enabled Domains. . . . . . . . . . . . . . . . . . . . . . . . 113
Single Sign-on for Informatica Web Applications. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 113
Transformations. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 113
Informatica Transformations. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 113
Web Services . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 117
Informatica Web Services . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 117
Workflows. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 117
Informatica Workflows. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 117

Chapter 10: Changes (10.1.1). . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 120


Support Changes. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 120
Big Data Management Hive Engine. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 120
Support Changes - Big Data Management Hadoop Distributions. . . . . . . . . . . . . . . . . . . . 121
Big Data Management Spark Support. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 121
Data Analyzer. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 122
Operating System. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 122
PowerExchange for SAP NetWeaver. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 122
Reporting and Dashboards Service. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 122
Reporting Service. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 122
Big Data. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 122
Functions Supported in the Hadoop Environment. . . . . . . . . . . . . . . . . . . . . . . . . . . . . 123
Hadoop Configuration Manager. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 123
Business Glossary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 124
Export File Restriction. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 124
Data Integration Service. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 124
Data Types. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 125
Informatica Data Types. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 125
Informatica Analyst. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 125

Profiles. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 125
Informatica Developer. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 125
Profiles. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 125
Mappings. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 126
Informatica Mappings. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 126
Enterprise Information Catalog. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 126
HDFS Scanner Enhancement. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 126
Relationships View. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 126
Metadata Manager. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 127
Cloudera Navigator Resources. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 127
Netezza Resources. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 127
PowerExchange Adapters. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 128
PowerExchange Adapters for Informatica . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 128
PowerExchange Adapters for PowerCenter . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 128
Transformations. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 129
Informatica Transformations. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 129
Workflows. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 129
Informatica Workflows. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 129
Documentation. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 130
Metadata Manager Documentation. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 130
PowerExchange for SAP NetWeaver Documentation. . . . . . . . . . . . . . . . . . . . . . . . . . . 130

Chapter 11: Release Tasks (10.1.1). . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 131


Metadata Manager. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 131
Business Intelligence Resources. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 131
Cloudera Navigator Resources. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 131
Tableau Resources. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 132

Part III: Version 10.1. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 133

Chapter 12: New Products (10.1). . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 134


Intelligent Data Lake. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 134
PowerExchange Adapters. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 137
PowerExchange Adapters for Informatica. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 137

Chapter 13: New Features (10.1). . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 138


Application Services. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 138
System Services. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 139
Big Data. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 139
Hadoop Ecosystem. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 139
Hadoop Security Systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 139
Spark Runtime Engine. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 140
Sqoop Connectivity for Relational Sources and Targets. . . . . . . . . . . . . . . . . . . . . . . . . 140

Transformation Support on the Blaze Engine. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 140
Business Glossary. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 141
Inherit Glossary Content Managers to All Assets. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 141
Bi-directional Custom Relationships. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 141
Custom Colors in the Relationship View Diagram. . . . . . . . . . . . . . . . . . . . . . . . . . . . . 141
Connectivity. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 142
Schema Names in IBM DB2 Connections. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 142
Command Line Programs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 142
Documentation. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 147
Exception Management. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 147
Informatica Administrator. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 148
Domain View. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 148
Monitoring. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 148
Informatica Analyst. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 149
Profiles. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 149
Informatica Developer. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 150
Generate Source File Name. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 150
Import from PowerCenter. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 150
Copy Text Between Excel and the Developer Tool. . . . . . . . . . . . . . . . . . . . . . . . . . . . . 150
Logical Data Object Read and Write Mapping Editing. . . . . . . . . . . . . . . . . . . . . . . . . . . 151
DDL Query. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 151
Profiles. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 151
Informatica Development Platform. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 152
Live Data Map. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 153
Email Notifications. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 153
Keyword Search. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 153
Profiling. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 153
Scanners. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 153
Mappings. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 154
Informatica Mappings. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 154
Metadata Manager. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 154
Universal Resources. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 154
Incremental Loading for Oracle and Teradata Resources. . . . . . . . . . . . . . . . . . . . . . . . . 155
Hiding Resources in the Summary View. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 155
Creating an SQL Server Integration Services Resource from Multiple Package Files. . . . . . . . 155
Metadata Manager Command Line Programs. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 156
Application Properties. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 156
Migrate Business Glossary Audit Trail History and Links to Technical Metadata. . . . . . . . . . 156
PowerCenter. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 157
PowerExchange Adapters. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 157
PowerExchange Adapters for Informatica. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 157
PowerExchange Adapters for PowerCenter. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 158

Security. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 158
Transformations. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 159
Informatica Transformations. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 159
Workflows. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 160
PowerCenter Workflows. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 160

Chapter 14: Changes (10.1). . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 162


Support Changes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 162
Application Services. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 163
System Services. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 163
Big Data. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 164
Business Glossary. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 164
Custom Relationships. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 164
Bi-Directional Default Relationships. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 164
Governed By Relationship. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 164
Glossary Workspace. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 164
Business Glossary Desktop. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 165
Kerberos Authentication for Business Glossary Command Program. . . . . . . . . . . . . . . . . 165
Command Line Programs. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 165
Exception Management. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 166
Informatica Developer. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 166
Live Data Map. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 166
Enterprise Information Catalog. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 167
Live Data Map Administrator Home Page. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 167
Metadata Manager. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 167
Microsoft SQL Server Integration Services Resources. . . . . . . . . . . . . . . . . . . . . . . . . . . 167
Certificate Validation for Command Line Programs. . . . . . . . . . . . . . . . . . . . . . . . . . . . 168
PowerCenter. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 168
Security. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 168
Transformations. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 169
Informatica Transformations. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 169
Workflows. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 170
Informatica Workflows. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 170

Chapter 15: Release Tasks (10.1). . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 171


Metadata Manager . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 171
Informatica Platform Resources. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 171
Verify the Truststore File for Command Line Programs. . . . . . . . . . . . . . . . . . . . . . . . . . 171
Security. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 172
Permissions. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 172

Part IV: Version 10.0. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 173

Chapter 16: New Products (10.0). . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 174


PowerExchange Adapters. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 174
PowerExchange Adapters for Informatica. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 174

Chapter 17: New Features (10.0). . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 176


Application Services. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 176
Disabling and Recycling Application Services. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 177
Data Integration Service. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 177
Model Repository Service. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 178
System Services. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 179
Big Data. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 180
Big Data Management Configuration Utility. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 180
Hadoop Connection. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 180
Hadoop Ecosystem. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 181
Parameters for Big Data. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 181
Run-Time and Validation Environments. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 181
Business Glossary. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 182
Approval Workflow. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 182
Glossary Asset Attachments. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 182
Long String Data Type. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 183
Support for Rich Text. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 183
Import and Export Enhancements. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 183
Email Notifications. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 183
Relationship View Diagram Enhancements. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 183
Analyst Tool Privileges. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 184
Business Term Links. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 184
Glossary Security. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 184
Asset View. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 184
Default Approvers. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 185
Command Line Programs. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 185
Connectivity. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 192
PowerCenter Connectivity. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 193
Connection Switching. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 193
Data Types. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 194
Informatica Data Types. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 194
Documentation. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 195
Domain. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 196
Nodes. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 196
Informatica Administrator. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 196
Manage Tab. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 197

Dependency Graph. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 198
Monitoring. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 198
Informatica Analyst. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 201
Asset Versioning. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 201
Profiles. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 201
Informatica Developer. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 202
Generate and Execute DDL. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 202
Generate Relational and Flat File Metadata at Run Time. . . . . . . . . . . . . . . . . . . . . . . . . 202
Import from PowerCenter. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 203
Monitoring Tool. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 203
Object Versioning. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 204
Physical Data Objects in an Application. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 205
Profiles. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 205
Informatica Development Platform. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 206
Mappings. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 207
Informatica Mappings. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 207
Metadata Manager. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 212
Tableau Resources. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 212
Data Lineage Enhancements. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 212
Metadata Catalog Views. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 213
Impala Queries in Cloudera Navigator Resources. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 213
Parameters in Informatica Platform Resources. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 213
Recent History. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 213
Related Catalog Objects and Impact Summary Filter and Sort. . . . . . . . . . . . . . . . . . . . . 214
Session Task Instances in the Impact Summary. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 214
Application and Data Lineage Properties. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 214
PowerCenter. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 215
PowerExchange Adapters. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 215
PowerExchange Adapters for Informatica. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 215
Reference Data. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 217
Rule Specifications. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 218
Security. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 219
Groups. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 219
Privileges. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 219
Roles. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 220
Transformation Language Functions. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 220
Informatica Functions. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 220
Transformations. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 220
Informatica Transformations. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 221
Workflows. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 225
Informatica Workflows. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 225

Chapter 18: Changes (10.0). . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 227
Installation. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 227
Changed Support. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 228
Application Services. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 228
Analyst Service. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 228
Data Integration Service. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 228
Model Repository Service. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 232
SAP BW Service. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 232
Big Data. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 234
Business Glossary. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 234
Relationship View. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 234
Asset Phase. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 234
Library Workspace. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 235
Import and Export. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 235
Command Line Programs. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 235
Domain. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 236
Logs. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 236
Informatica Administrator. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 236
Domain tab. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 236
Monitoring. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 238
Informatica Analyst. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 238
Profiles. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 238
Scorecards. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 240
Informatica Developer. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 240
Application Deployment Changes. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 240
Flat File Data Objects. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 240
Microsoft SQL Server Changes. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 241
Logical Data Object Editing. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 241
Pushdown Optimization for ODBC Sources and Targets. . . . . . . . . . . . . . . . . . . . . . . . . 242
Mappings. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 242
Parameter Files. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 242
Partitioned Mappings. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 242
Pushdown Optimization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 243
Run-time Properties. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 244
Metadata Manager. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 244
ODBC Connectivity for Informix Resources. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 244
ODBC Connectivity for Microsoft SQL Server Resources. . . . . . . . . . . . . . . . . . . . . . . . . 244
Impact Summary for PowerCenter Objects. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 245
Maximum Concurrent Resource Loads. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 245
Search. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 245
Metadata Manager Log File Changes. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 246

Business Glossary Model. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 246
Profiling. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 246
PowerCenter. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 246
Informix Native Connections. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 246
pmrep Changes. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 247
PowerCenter Data Profiling. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 247
PowerExchange Adapters. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 247
PowerExchange Adapters for Informatica . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 247
Reference Data. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 249
Rule Specifications. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 250
Security. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 250
Authentication. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 250
Sources and Targets. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 251
Sources and Targets in PowerCenter. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 251
Transformations. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 251
Informatica Transformations. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 251
Workflows. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 254
Informatica Workflows. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 254

Chapter 19: Release Tasks (10.0). . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 258


Mappings. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 258
Parameter Precision . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 258

Part V: Version 9.6.1. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 259

Chapter 20: New Features, Changes, and Release Tasks (9.6.1 HotFix 4). . . 260
New Features (9.6.1 HotFix 4). . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 260
Command Line Programs. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 261
Connectivity. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 262
Exception Management. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 263
Informatica Domain . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 263
Informatica Transformations. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 263
Metadata Manager. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 264
PowerExchange Adapters. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 265
Security . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 266
Changes (9.6.1 HotFix 4). . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 266
Change to Support in Version 9.6.1 HotFix 4. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 266
Application Services. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 266
Informatica Domain. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 266
Informatica Installation. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 267
Informatica Transformations. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 267
Metadata Manager. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 268
Changes to Security. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 268

Release Tasks (9.6.1 HotFix 4). . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 268
Metadata Manager. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 268

Chapter 21: New Features, Changes, and Release Tasks (9.6.1 HotFix 3). . . 270
New Features (9.6.1 HotFix 3). . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 270
Business Glossary. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 270
Informatica Data Services. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 271
Informatica Transformations. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 271
Metadata Manager. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 272
PowerExchange Adapters for PowerCenter. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 273
Changes (9.6.1 HotFix 3). . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 274
Business Glossary. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 274
Informatica Transformations. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 275
Metadata Manager. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 275
Security. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 277
Release Tasks (9.6.1 HotFix 3). . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 277
Metadata Manager. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 277

Chapter 22: New Features, Changes, and Release Tasks (9.6.1 HotFix 2). . . 279
New Features (9.6.1 HotFix 2). . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 279
Big Data. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 279
Business Glossary. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 281
Command Line Programs. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 282
Data Quality Accelerators. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 282
Informatica Developer. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 282
Informatica Domain. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 282
Informatica Transformations. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 283
Metadata Manager. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 285
PowerCenter. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 286
PowerExchange . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 287
PowerExchange Adapters. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 287
Workflows. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 290
Changes (9.6.1 HotFix 2). . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 290
Connectivity. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 290
Informatica Analyst. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 291
Informatica Transformations. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 291
Metadata Manager. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 292
PowerExchange Adapters. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 295
Release Tasks (9.6.1 HotFix 2). . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 295
Metadata Manager. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 295

Chapter 23: New Features, Changes, and Release Tasks (9.6.1 HotFix 1). . . 296
New Features (9.6.1 HotFix 1). . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 296

Big Data. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 296
Business Glossary. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 297
Command Line Programs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 298
Connectivity. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 298
Data Quality Accelerators. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 298
Documentation. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 299
Informatica Developer. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 299
Informatica Domain. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 299
Informatica Transformations. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 300
Mappings. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 300
Metadata Manager. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 301
PowerCenter. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 301
PowerExchange Adapters. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 301
Reference Data. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 303
Rule Specifications. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 303
Changes (9.6.1 HotFix 1). . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 304
Application Services. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 304
Business Glossary. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 304
Informatica Transformations. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 304
Metadata Manager. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 305
PowerCenter Transformations. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 305
PowerExchange. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 306
PowerExchange Adapters. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 306
Reference Data. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 306
Release Tasks (9.6.1 HotFix 1). . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 307
PowerExchange Adapters. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 307
Informatica Web Client Applications. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 308

Chapter 24: New Features (9.6.1). . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 309


Application Services. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 309
Content Management Service. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 309
Big Data. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 310
Business Glossary. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 311
Command Line Programs. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 312
Documentation. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 315
Informatica Administrator. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 316
Informatica Developer. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 316
Informatica Development Platform. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 316
Informatica Transformations. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 317
Address Validator Transformation. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 317
Data Processor Transformation. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 319
Match Transformation. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 319
SQL Transformation. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 320

Installer. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 320
Mappings. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 320
Informatica Mappings. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 320
Metadata Manager. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 321
PowerExchange. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 323
Listener Service. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 323
infacmd pwx Commands. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 323
PowerExchange Adapters. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 323
Informatica Adapters. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 323
PowerCenter Adapters. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 324
Profiles and Scorecards. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 325
Reference Data. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 326
Rule Specifications. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 326
Sources and Targets. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 326
Informatica Sources and Targets. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 326
PowerCenter Sources and Targets. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 327
Transformation Language Functions. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 327
Informatica Functions. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 327

Chapter 25: Changes (9.6.1). . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 328


Big Data. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 328
Domain. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 328
Informatica Transformations. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 329
Address Validator Transformation. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 329
Data Masking Transformation. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 329
Data Processor Transformation. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 329
Mappings. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 330
Informatica Mappings. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 330
Metadata Manager. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 330
PowerCenter Transformations. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 331
Data Masking Transformation. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 331
PowerExchange Adapters. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 331
PowerExchange Adapters for PowerCenter. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 331
Profiles and Scorecards. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 333
Rule Specifications. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 333
Security. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 333

Part VI: Version 9.6.0. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 335

Chapter 26: New Features and Enhancements (9.6.0). . . . . . . . . . . . . . . . . . . 336


Version 9.6.0. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 336
Informatica Analyst. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 336
Informatica Installer. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 337

Informatica Data Explorer. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 338
Informatica Data Quality. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 339
Informatica Data Services. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 343
Informatica Data Transformation. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 346
Informatica Developer. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 347
Informatica Development Platform. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 348
Informatica Domain. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 348
Informatica Domain Security. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 350
Command Line Programs. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 352
PowerCenter. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 355
PowerCenter Big Data Edition. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 355
PowerCenter Advanced Edition. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 356
Metadata Manager. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 356
PowerExchange Adapters for PowerCenter. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 357
PowerExchange Adapters for Informatica. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 359
Informatica Documentation. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 360

Chapter 27: Changes to Informatica Data Explorer (9.6.0). . . . . . . . . . . . . . . 363


Enterprise Discovery. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 363
Profile Results Verification. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 363
Rules. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 364
Scorecards. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 364

Chapter 28: Changes to Informatica Data Quality (9.6.0). . . . . . . . . . . . . . . . 365


Address Validator Transformation. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 365
Exception Record Management. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 365
Informatica Data Director for Data Quality. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 366
Java Transformation. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 366
Mapping Parameters. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 366
Match Transformation. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 367
Native Connectivity to Microsoft SQL Server. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 367
Port-to-Port Data Conversion. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 367
Profile Results Verification. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 367
Reference Tables. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 368
Rules. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 368
Scorecards. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 368

Chapter 29: Changes to Informatica Data Services (9.6.0). . . . . . . . . . . . . . . 369


Java Transformation. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 369
Native Connectivity to Microsoft SQL Server. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 369
Port-to-Port Data Conversion. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 370
Profile Results Verification. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 370
Rules. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 370

Scorecards. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 370

Chapter 30: Changes to Informatica Data Transformation (9.6.0). . . . . . . . . 372


Export Mapping to PowerCenter. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 372
Invalid CMConfig File. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 372

Chapter 31: Changes to Informatica Domain (9.6.0). . . . . . . . . . . . . . . . . . . . 373


Informatica Services. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 373
Analyst Service. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 374
Content Management Service. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 374
Data Integration Service. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 374
Data Director Service. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 374
Test Data Manager Service. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 375
Model Repository Service Privileges. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 375
Domain Security . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 375
Changes to Supported Platforms. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 375

Chapter 32: Changes to PowerCenter (9.6.0). . . . . . . . . . . . . . . . . . . . . . . . . . 376


Native Connectivity to Microsoft SQL Server. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 376
Pushdown Optimization for ODBC Sources and Targets. . . . . . . . . . . . . . . . . . . . . . . . . . . . 376
Repository Connection File Default Location. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 376
Repository Connection File. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 377
Umask Configuration for Operating System Profiles. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 377

Chapter 33: Changes to PowerCenter Big Data Edition (9.6.0). . . . . . . . . . . . 378


Hadoop Environment Properties File. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 378
Mappings in the Native Environment. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 378

Chapter 34: Changes to Metadata Manager (9.6.0). . . . . . . . . . . . . . . . . . . . . 379


Browser Support. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 379
Metadata Manager Agent. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 379
Metadata Manager Business Glossaries. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 380
Metadata Manager Documentation. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 380
mmcmd Changes. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 380
Native Connectivity to Microsoft SQL Server. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 381
Password Modification for Resources. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 382

Chapter 35: Changes to Adapters for PowerCenter (9.6.0). . . . . . . . . . . . . . . 383


PowerExchange for Facebook . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 383
PowerExchange for Hadoop. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 383
PowerExchange for LinkedIn. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 384
PowerExchange for Microsoft Dynamics CRM. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 384
PowerExchange for SAP NetWeaver. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 384

PowerExchange for Twitter. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 385
PowerExchange for Web Services. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 386

Chapter 36: Changes to Adapters for Informatica (9.6.0). . . . . . . . . . . . . . . . 387


PowerExchange for DataSift. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 387
PowerExchange for Facebook . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 387
PowerExchange for LinkedIn. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 388
PowerExchange for Salesforce . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 388
PowerExchange for SAP NetWeaver. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 388
PowerExchange for Twitter. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 388
PowerExchange for Web Content-Kapow Katalyst . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 388

Preface
The Informatica Release Guide lists new features and enhancements, behavior changes between versions,
and tasks you might need to perform after you upgrade from a previous version. The Informatica Release
Guide is written for all types of users who are interested in the new features and changed behavior. This
guide assumes that you have knowledge of the features for which you are responsible.

Informatica Resources

Informatica Network
Informatica Network hosts Informatica Global Customer Support, the Informatica Knowledge Base, and other
product resources. To access Informatica Network, visit [Link]

As a member, you can:

• Access all of your Informatica resources in one place.
• Search the Knowledge Base for product resources, including documentation, FAQs, and best practices.
• View product availability information.
• Review your support cases.
• Find your local Informatica User Group Network and collaborate with your peers.

Informatica Knowledge Base


Use the Informatica Knowledge Base to search Informatica Network for product resources such as
documentation, how-to articles, best practices, and PAMs.

To access the Knowledge Base, visit [Link] If you have questions, comments, or ideas
about the Knowledge Base, contact the Informatica Knowledge Base team at
KB_Feedback@[Link].

Informatica Documentation
To get the latest documentation for your product, browse the Informatica Knowledge Base at
[Link]

If you have questions, comments, or ideas about this documentation, contact the Informatica Documentation
team through email at infa_documentation@[Link].

Informatica Product Availability Matrixes
Product Availability Matrixes (PAMs) indicate the versions of operating systems, databases, and other types
of data sources and targets that a product release supports. If you are an Informatica Network member, you
can access PAMs at
[Link]

Informatica Velocity
Informatica Velocity is a collection of tips and best practices developed by Informatica Professional
Services. Developed from the real-world experience of hundreds of data management projects, Informatica
Velocity represents the collective knowledge of our consultants who have worked with organizations from
around the world to plan, develop, deploy, and maintain successful data management solutions.

If you are an Informatica Network member, you can access Informatica Velocity resources at
[Link]

If you have questions, comments, or ideas about Informatica Velocity, contact Informatica Professional
Services at ips@[Link].

Informatica Marketplace
The Informatica Marketplace is a forum where you can find solutions that augment, extend, or enhance your
Informatica implementations. By leveraging any of the hundreds of solutions from Informatica developers
and partners, you can improve your productivity and speed up time to implementation on your projects. You
can access Informatica Marketplace at [Link]

Informatica Global Customer Support


You can contact a Global Support Center by telephone or through Online Support on Informatica Network.

To find your local Informatica Global Customer Support telephone number, visit the Informatica website at
the following link:
[Link]

If you are an Informatica Network member, you can use Online Support at [Link]

Part I: 10.2
This part contains the following chapters:

• New Products (10.2), 26
• New Features (10.2), 27
• Changes (10.2), 62
• Release Tasks (10.2), 78

Chapter 1

New Products (10.2)


This chapter includes the following topic:

• PowerExchange Adapters, 26

PowerExchange Adapters

PowerExchange Adapters for Informatica


This section describes new Informatica adapters in 10.2.

PowerExchange for Microsoft Azure Data Lake Store


Effective in version 10.2, you can create a Microsoft Azure Data Lake Store connection to specify the location
of Microsoft Azure Data Lake Store sources and targets you want to include in a data object. You can use the
Microsoft Azure Data Lake Store connection in data object read and write operations. You can validate and
run mappings in the native environment or on the Blaze engine in the Hadoop environment.

For more information, see the Informatica PowerExchange for Microsoft Azure Data Lake Store User Guide.

Chapter 2

New Features (10.2)


This chapter includes the following topics:

• Application Services, 27
• Big Data, 28
• Command Line Programs, 31
• Data Types, 39
• Documentation, 40
• Enterprise Information Catalog, 41
• Informatica Analyst, 44
• Intelligent Data Lake, 45
• Informatica Developer, 47
• Informatica Installation, 47
• Intelligent Streaming, 47
• Metadata Manager, 49
• PowerCenter, 49
• PowerExchange Adapters, 50
• Rule Specifications, 54
• Security, 54
• Transformation Language, 55
• Transformations, 56
• Workflows, 60

Application Services
This section describes new application service features in 10.2.

Model Repository Service


This section describes new Model Repository Service features in 10.2.

Import Objects from Previous Versions
Effective in version 10.2, you can use infacmd to upgrade objects exported from an Informatica 10.1 or
10.1.1 Model repository to the current metadata format, and then import the upgraded objects into the
current Informatica release.

For more information, see the "Object Import and Export" chapter in the Informatica 10.2 Developer Tool
Guide, or the "infacmd mrs Command Reference" chapter in the Informatica 10.2 Command Reference.
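
For example, the upgrade-then-import flow might look like the following sketch. The command and option names are illustrative assumptions; verify the exact syntax in the "infacmd mrs Command Reference" and "infacmd tools Command Reference" chapters.

    # Upgrade objects exported from a 10.1 or 10.1.1 Model repository to the
    # 10.2 metadata format. Command and flag names are illustrative.
    infacmd.sh mrs upgradeExportedObjects -dn MyDomain -un Administrator -pd <password> \
        -sf export_101.xml -tf export_upgraded.xml

    # Import the upgraded objects into a 10.2 Model repository.
    infacmd.sh tools importObjects -dn MyDomain -un Administrator -pd <password> \
        -rs MyModelRepository -fp export_upgraded.xml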

Big Data
This section describes new big data features in 10.2.

Big Data Management Installation


Effective in version 10.2, the Data Integration Service automatically installs the Big Data Management
binaries on the cluster.

When you run a mapping, the Data Integration Service checks for the binary files on the cluster. If they do not
exist or if they are not synchronized, the Data Integration Service prepares the files for transfer. It transfers
the files to the distributed cache through the Informatica Hadoop staging directory on HDFS. By default, the
staging directory is /tmp. This process replaces the requirement to install distribution packages on the
Hadoop cluster.

For more information, see the Informatica Big Data Management 10.2 Hadoop Integration Guide.

Cluster Configuration
A cluster configuration is an object in the domain that contains configuration information about the Hadoop
cluster. The cluster configuration enables the Data Integration Service to push mapping logic to the Hadoop
environment.

When you create the cluster configuration, you import cluster configuration properties that are contained in
configuration site files. You can import these properties directly from a cluster or from a cluster
configuration archive file. You can also create connections to associate with the cluster configuration.

Previously, you ran the Hadoop Configuration Manager utility to configure connections and other information
to enable the Informatica domain to communicate with the cluster.

For more information about cluster configuration, see the "Cluster Configuration" chapter in the Informatica
Big Data Management 10.2 Administrator Guide.
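
As a sketch, you might create a cluster configuration from an exported archive file with the infacmd cluster plugin that is described later in this chapter. The flag names other than the standard connection options are assumptions; verify them in the Command Reference.

    # Create a cluster configuration named cdh_cluster from a cluster
    # configuration archive file. The -cn and -path flag names are
    # illustrative assumptions.
    infacmd.sh cluster createConfiguration -dn MyDomain -un Administrator \
        -pd <password> -cn cdh_cluster -path /tmp/cluster_config.zip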

Processing Hierarchical Data


Effective in version 10.2, you can use complex data types, such as array, struct, and map, in mappings that
run on the Spark engine. With complex data types, the Spark engine directly reads, processes, and writes
hierarchical data in Avro, JSON, and Parquet complex files.

Develop mappings with complex ports, operators, and functions to perform the following tasks:

• Generate and modify hierarchical data.
• Transform relational data to hierarchical data.
• Transform hierarchical data to relational data.
• Convert data from one complex file format to another.
When you process hierarchical data, you can use hierarchical conversion wizards to simplify the mapping
development tasks. Use these wizards in the following scenarios:

• To generate hierarchical data of type struct from one or more ports.
• To generate hierarchical data of a nested struct type from ports in two transformations.
• To extract elements from hierarchical data in a complex port.
• To flatten hierarchical data in a complex port.
For more information, see the "Processing Hierarchical Data on the Spark Engine" chapter in the Informatica
Big Data Management 10.2 User Guide.
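
For example, complex operators let an expression reach directly into a hierarchical port. The port and element names in the following sketch are hypothetical:

    -- Dot operator reads an element of a struct port named customer.
    customer.address.city
    -- Subscript operator reads an element of an array in the struct.
    customer.phone_numbers[0]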

Stateful Computing on the Spark Engine


Effective in version 10.2, you can use window functions in an Expression transformation to perform stateful
calculations on the Spark engine. Window functions operate on a group of rows and calculate a single return
value for every input row. You can use window functions to perform the following tasks:

• Retrieve data from previous or subsequent rows.
• Calculate a cumulative sum based on a group of rows.
• Calculate a cumulative average based on a group of rows.
For more information, see the "Stateful Computing on the Spark Engine" chapter of the Big Data Management
10.2 User Guide.
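
For example, a LAG expression returns a value from a previous row in the window. The port name in the following sketch is hypothetical, and the partition and order keys are configured in the transformation windowing properties:

    -- Returns sales_amount from one row back, or 0 for the first row in
    -- each window partition.
    LAG ( sales_amount, 1, 0 )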

Data Integration Service Queuing


Effective in version 10.2, if you deploy multiple mapping jobs or workflow mapping tasks at the same time,
the Data Integration Service queues the jobs in a persisted queue and runs the jobs when resources are
available. You can view the current status of mapping jobs on the Monitor tab of the Administrator tool.

All queues are persisted by default. If the Data Integration Service node shuts down unexpectedly, the queue
does not fail over when the Data Integration Service fails over. The queue remains on the Data Integration
Service machine, and the Data Integration Service resumes processing the queue when you restart it.

By default, each queue can hold 10,000 jobs at a time. When the queue is full, the Data Integration Service
rejects job requests and marks them as failed. When the Data Integration Service starts running jobs in the
queue, you can deploy additional jobs.

For more information, see the "Queuing" chapter in the Informatica Big Data Management 10.2 Administrator
Guide.

Blaze Job Monitor


Effective in version 10.2, you can configure the host and port number to start the Blaze Job Monitor
application in the Hadoop connection properties. The default value is <hostname>:9080. If you do not
configure the host name, the Blaze engine uses the first alphabetical node in the cluster.

For more information, see the "Connections" chapter in the Big Data Management 10.2 User Guide.
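
For example, the value uses the <hostname>:<port> format. The host name in the following sketch is a placeholder:

    blaze-monitor-node01.example.com:9080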

Data Integration Service Properties for Hadoop Integration
Effective in version 10.2, the Data Integration Service added properties required to integrate the domain with
the Hadoop environment.

The following table describes the new properties:

Hadoop Staging Directory
The HDFS directory where the Data Integration Service pushes Informatica Hadoop binaries and stores temporary files during processing. Default is /tmp.

Hadoop Staging User
The HDFS user that performs operations on the Hadoop staging directory. Required if the Data Integration Service user is empty. The user needs write permissions on the Hadoop staging directory. Default is the Data Integration Service user.

Custom Hadoop OS Path
The local path to the Informatica Hadoop binaries compatible with the Hadoop operating system. Required when the Hadoop cluster and the Data Integration Service are on different supported operating systems. Download and extract the Informatica binaries for the Hadoop cluster on the machine that hosts the Data Integration Service. The Data Integration Service uses the binaries in this directory to integrate the domain with the Hadoop cluster. The Data Integration Service can synchronize the following operating systems: SUSE 11 and Red Hat 6.5.

Changes take effect after you recycle the Data Integration Service.

As a result of the changes in cluster integration, the following properties are removed from the Data
Integration Service:

• Informatica Home Directory on Hadoop
• Hadoop Distribution Directory

For more information, see the Informatica 10.2 Hadoop Integration Guide.
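
As a sketch, you could also set these properties from the command line with infacmd dis updateServiceOptions. The execution option name shown is an assumption based on the property label; confirm it in the Command Reference.

    # Set the Hadoop staging directory on a Data Integration Service.
    # The option name after -o is a hypothetical example.
    infacmd.sh dis updateServiceOptions -dn MyDomain -sn MyDIS -un Administrator \
        -pd <password> -o ExecutionOptions.HadoopStagingDirectory=/infa/staging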

Sqoop
Effective in version 10.2, if you use Sqoop data objects, you can use the following specialized Sqoop
connectors to run mappings on the Spark engine:

• Cloudera Connector Powered by Teradata
• Hortonworks Connector for Teradata
These specialized connectors use native protocols to connect to the Teradata database.

For more information, see the Informatica Big Data Management 10.2 User Guide.

Autoscaling in an Amazon EMR Cluster


Effective in version 10.2, Big Data Management adds support for Spark mappings to take advantage of
autoscaling in an Amazon EMR cluster.

Autoscaling enables the EMR cluster administrator to establish threshold-based rules for adding and
subtracting cluster task and core nodes. Big Data Management certifies support for Spark mappings that run
on an autoscaling-enabled EMR cluster.



Transformation Support on the Blaze Engine
Effective in version 10.2, the following transformations have additional support on the Blaze engine:

• Update Strategy. Supports targets that are ORC bucketed on all columns.
For more information, see the "Mapping Objects in a Hadoop Environment" chapter in the Informatica Big
Data Management 10.2 User Guide.

Hive Functionality for the Blaze Engine


Effective in version 10.2, mappings that run on the Blaze engine can read from and write to bucketed and
sorted targets.

For information about how to configure mappings for the Blaze engine, see the "Mappings in a Hadoop
Environment" chapter in the Informatica Big Data Management 10.2 User Guide.

Transformation Support on the Spark Engine


Effective in version 10.2, the following transformations are supported with restrictions on the Spark engine:

• Normalizer
• Rank
• Update Strategy
Effective in version 10.2, the following transformations have additional support on the Spark engine:

• Lookup. Supports unconnected lookup from the Filter, Aggregator, Router, Expression, and Update
Strategy transformations.
For more information, see the "Mapping Objects in a Hadoop Environment" chapter in the Informatica Big
Data Management 10.2 User Guide.

Hive Functionality for the Spark Engine


Effective in version 10.2, the following functionality is supported for mappings that run on the Spark engine:

• Reading and writing to Hive resources in Amazon S3 buckets


• Reading and writing to transactional Hive tables
• Reading and writing to Hive table columns that are secured with fine-grained SQL authorization

For information about how to configure mappings for the Spark engine, see the "Mappings in a Hadoop
Environment" chapter in the Informatica Big Data Management 10.2 User Guide.

Command Line Programs


This section describes new commands in 10.2.

infacmd cluster Commands


cluster is a new infacmd plugin that performs operations on cluster configurations.



The following table describes new infacmd cluster commands:

Command Description

clearConfigurationProperties Clears overridden property values in the cluster configuration set.

createConfiguration Creates a new cluster configuration from XML files or from a remote cluster manager.

deleteConfiguration Deletes a cluster configuration from the domain.

exportConfiguration Exports a cluster configuration to a compressed file or a combined XML file.

listAssociatedConnections Lists connections by type that are associated with the specified cluster
configuration.

listConfigurationGroupPermissions Lists the permissions that a group has for a cluster configuration.

listConfigurationSets Lists configuration sets in the cluster configuration.

listConfigurationProperties Lists configuration properties in the cluster configuration set.

listConfigurations Lists cluster configuration names.

listConfigurationUserPermissions Lists the permissions that a user has for a cluster configuration.

refreshConfiguration Refreshes a cluster configuration from XML files or from a remote cluster manager.

setConfigurationPermissions Sets permissions on a cluster configuration for a user or a group after removing previous permissions.

setConfigurationProperties Sets overridden property values in the cluster configuration set.

For more information, see the "infacmd cluster Command Reference" chapter in the Informatica 10.2
Command Reference.

infacmd dis Options


The following table describes new Data Integration Service options for the infacmd dis UpdateServiceOptions command:

Option Description

ExecutionOptions.MaxHadoopBatchExecutionPoolSize The maximum number of deployed Hadoop jobs that can run concurrently.

ExecutionOptions.MaxNativeBatchExecutionPoolSize The maximum number of deployed native jobs that each Data Integration Service process can run concurrently.

ExecutionOptions.MaxOnDemandExecutionPoolSize The maximum number of on-demand jobs that can run concurrently. Jobs include data previews, profiling jobs, REST and SQL queries, web service requests, and mappings run from the Developer tool.

WorkflowOrchestrationServiceOptions.MaxWorkerThreads The maximum number of threads that the Data Integration Service can use to run parallel tasks between a pair of inclusive gateways in a workflow. The default value is 10. If the number of tasks between the inclusive gateways is greater than the maximum value, the Data Integration Service runs the tasks in batches that the value specifies.

For more information, see the "infacmd dis Command Reference" chapter in the Informatica 10.2 Command
Reference.
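
For example, the following command raises the on-demand job limit for a Data Integration Service. This is a sketch that assumes the option name listed above; -sn identifies the service.

    infacmd.sh dis UpdateServiceOptions -dn MyDomain -sn MyDataIntegrationService -un Administrator -pd MyPassword -o ExecutionOptions.MaxOnDemandExecutionPoolSize=20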

infacmd ipc Commands


The following table describes a new option for an infacmd ipc command:

Command Description

genReuseReportFromPC Contains the following new option:
- BlockSize: Optional. The number of mappings that you want to run the infacmd ipc genReuseReportFromPC command against.

For more information, see the "infacmd ipc Command Reference" chapter in the Informatica 10.2 Command
Reference.
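
For example, the following command generates the reuse report in batches of 100 mappings. This is a sketch: only -BlockSize is taken from the description above, and the other required report options are omitted.

    infacmd.sh ipc genReuseReportFromPC -dn MyDomain -un Administrator -pd MyPassword -BlockSize 100 ...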

infacmd isp Commands


The following table describes changes to infacmd isp commands:

Command Description

createConnection Defines a connection and the connection options. Added, changed, and removed Hadoop connection options. See infacmd isp createConnection.

getDomainSamlConfig Renamed from getSamlConfig. Returns the value of the cst option set for Security Assertion Markup Language (SAML) authentication. Specifies the allowed time difference between the Active Directory Federation Services (AD FS) host system clock and the system clock on the master gateway node.


getUserActivityLog Returns user activity log data, which now includes successful and unsuccessful user
login attempts from Informatica clients.
The user activity data includes the following properties for each login attempt from an
Informatica client:
- Application name
- Application version
- Host name or IP address of the application host
If the client sets custom properties on login requests, the data includes the custom
properties.

listConnections Lists connection names by type. You can list by all connection types or filter the results
by one connection type.
The -ct option is now available for the command. Use the -ct option to filter connection
types.

purgeLog Purges log events and database records for license usage.
The -lu option is now obsolete.

SwitchToGatewayNode The following options are added for configuring SAML authentication:
- asca. The alias name specified when importing the identity provider assertion signing
certificate into the truststore file used for SAML authentication.
- saml. Enables or disables SAML authentication in the Informatica domain.
- std. The directory containing the custom truststore file required to use SAML
authentication on gateway nodes within the domain.
- stp. The custom truststore password used for SAML authentication.

For more information, see the "infacmd isp Command Reference" chapter in the Informatica 10.2 Command
Reference.
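
For example, the following command uses the new -ct option to list only connections of one type. The HADOOP type value is an assumption; use the connection type values documented in the Command Reference.

    infacmd.sh isp ListConnections -dn MyDomain -un Administrator -pd MyPassword -ct HADOOP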

infacmd isp createConnection


This section lists new, changed, and removed Hadoop connection options for the infacmd isp
createConnection command in 10.2.

Hadoop Connection Options


The following table describes new Hadoop connection options available in 10.2:

Option Description

clusterConfigId The cluster configuration ID associated with the Hadoop cluster.

blazeJobMonitorURL The host name and port number for the Blaze Job Monitor.

rejDirOnHadoop Enables hadoopRejDir. Used to specify a location to move reject files when you run
mappings.

hadoopRejDir The remote directory where the Data Integration Service moves reject files when you
run mappings. Enable the reject directory using rejDirOnHadoop.


sparkEventLogDir An optional HDFS file path of the directory that the Spark engine uses to log events.

sparkYarnQueueName The YARN scheduler queue name used by the Spark engine that specifies available
resources on a cluster.
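
For example, the following command creates a Hadoop connection that references a cluster configuration and sets the new reject directory options. This is a sketch with illustrative values; the -cn, -cid, -ct, and -o options follow standard infacmd isp CreateConnection usage.

    infacmd.sh isp CreateConnection -dn MyDomain -un Administrator -pd MyPassword -cn My_Hadoop_Conn -cid My_Hadoop_Conn -ct HADOOP -o "clusterConfigId=MyClusterConfig blazeJobMonitorURL=myhost:9080 rejDirOnHadoop=true hadoopRejDir=/user/infa/reject"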

The following table describes Hadoop connection options that are renamed in 10.2:

Current Name Previous Name Description

blazeYarnQueueName cadiAppYarnQueueName The YARN scheduler queue name used by the Blaze engine that specifies available resources on a cluster. The name is case sensitive.

blazeExecutionParameterList cadiExecutionParameterList Custom properties that are unique to the Blaze engine.

blazeMaxPort cadiMaxPort The maximum value for the port number range for the Blaze engine.

blazeMinPort cadiMinPort The minimum value for the port number range for the Blaze engine.

blazeUserName cadiUserName The owner of the Blaze service and Blaze service logs.

blazeStagingDirectory cadiWorkingDirectory The HDFS file path of the directory that the Blaze engine uses to store temporary files.

hiveStagingDatabaseName databaseName Namespace for Hive staging tables.

impersonationUserName hiveUserName Hadoop impersonation user. The user name that the Data Integration Service impersonates to run mappings in the Hadoop environment.

sparkStagingDirectory SparkHDFSStagingDir The HDFS file path of the directory that the Spark engine uses to store temporary files for running jobs.

The following table describes Hadoop connection options that are removed from the UI and imported into the
cluster configuration:

Option Description

RMAddress The service within Hadoop that submits requests for resources or spawns YARN applications. Imported into the cluster configuration as the property yarn.resourcemanager.address.

defaultFSURI The URI to access the default Hadoop Distributed File System. Imported into the cluster configuration as the property fs.defaultFS or fs.default.name.



The following table describes Hadoop connection options that are deprecated in 10.2 and are no longer
available in the UI:

Option Description

metastoreDatabaseDriver* Driver class name for the JDBC data store.

metastoreDatabasePassword* The password for the metastore user name.

metastoreDatabaseURI* The JDBC connection URI used to access the data store in a local metastore
setup.

metastoreDatabaseUserName* The metastore database user name.

metastoreMode* Controls whether to connect to a remote metastore or a local metastore.

remoteMetastoreURI* The metastore URI used to access metadata in a remote metastore setup.
This property is imported into the cluster configuration as the property hive.metastore.uris.

jobMonitoringURL The URL for the MapReduce JobHistory server.

* These properties are deprecated in 10.2. When you upgrade to 10.2, the property values you set in a previous release
are saved in the repository, but they do not appear in the connection properties.

The following properties are dropped. If they appear in connection strings, they will have no effect:

• hadoopClusterInfoExecutionParametersList
• passThroughSecurityEnabled
• hiverserver2Enabled
• hiveInfoExecutionParametersList
• cadiPassword
• sparkMaster
• sparkDeployMode

HBase Connection
The following table describes HBase connection options that are removed from the connection and imported
into the cluster configuration:

Property Description

ZOOKEEPERHOSTS Name of the machine that hosts the ZooKeeper server.

ZOOKEEPERPORT Port number of the machine that hosts the ZooKeeper server.

ISKERBEROSENABLED Enables the Informatica domain to communicate with the HBase master server or region server that uses Kerberos authentication.

hbaseMasterPrincipal Service Principal Name (SPN) of the HBase master server.

hbaseRegionServerPrincipal Service Principal Name (SPN) of the HBase region server.



Hive Connection
The following table describes Hive connection options that are removed from the connection and imported
into the cluster configuration:

Property Description

defaultFSURI The URI to access the default Hadoop Distributed File System.

jobTrackerURI The service within Hadoop that submits the MapReduce tasks to
specific nodes in the cluster.

hiveWarehouseDirectoryOnHDFS The absolute HDFS file path of the default database for the
warehouse that is local to the cluster.

metastoreExecutionMode Controls whether to connect to a remote metastore or a local metastore.

metastoreDatabaseURI The JDBC connection URI used to access the data store in a local
metastore setup.

metastoreDatabaseDriver Driver class name for the JDBC data store.

metastoreDatabaseUserName The metastore database user name.

metastoreDatabasePassword The password for the metastore user name.

remoteMetastoreURI The metastore URI used to access metadata in a remote metastore setup. This property is imported into the cluster configuration as the property hive.metastore.uris.

HBase Connection Options for MapR-DB


The ISKERBEROSENABLED connection option is obsolete and imported into the cluster configuration.

infacmd mrs Commands


The following table describes new infacmd mrs commands:

Command Description

manageGroupPermissionOnProject Manages permissions on multiple projects for a group.

manageUserPermissionOnProject Manages permissions on multiple projects for a user.

upgradeExportedObjects Upgrades objects exported to an .xml file from a previous Informatica release to the
current metadata format. The command generates an .xml file that contains the upgraded
objects.

For more information, see the "infacmd mrs Command Reference" chapter in the Informatica 10.2 Command
Reference.
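
For example, the following command upgrades objects that were exported from an earlier release. This is a sketch: -sn identifies the Model Repository Service, and the -sf and -tf options are assumed names for the source and target .xml files.

    infacmd.sh mrs upgradeExportedObjects -dn MyDomain -un Administrator -pd MyPassword -sn MyModelRepository -sf export_101.xml -tf export_102.xml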



infacmd ms Commands
The following table describes new infacmd ms commands:

Command Description

GetMappingStatus Gets the current status of a mapping job by job ID.

For more information, see the "infacmd ms Command Reference" chapter in the Informatica 10.2 Command
Reference.
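
For example, the following command returns the status of a deployed mapping job. This is a sketch: -sn identifies the Data Integration Service, and -j is an assumed name for the job ID option.

    infacmd.sh ms GetMappingStatus -dn MyDomain -un Administrator -pd MyPassword -sn MyDataIntegrationService -j Job_00123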

infacmd wfs Commands


The following table describes new infacmd wfs commands:

Command Description

completeTask Completes a Human task instance that you specify.

delegateTask Assigns ownership of a Human task instance to a user or group.

listTasks Lists the Human task instances that meet the filter criteria that you specify.

releaseTask Releases a Human task instance from the current owner, and returns ownership of the task
instance to the business administrator that the workflow configuration identifies.

startTask Changes the status of a Human task instance to IN_PROGRESS.

For more information, see the "infacmd wfs Command Reference" chapter in the Informatica 10.2 Command
Reference.
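
For example, the following commands list Human task instances and then complete one of them. This is a sketch: the filter options are omitted, and -tid is an assumed name for the task instance ID option.

    infacmd.sh wfs listTasks -dn MyDomain -un Administrator -pd MyPassword ...
    infacmd.sh wfs completeTask -dn MyDomain -un Administrator -pd MyPassword -tid Task_00123 ...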

infasetup Commands
The following table describes changes to infasetup commands:

Command Description

DefineDomain The following options are added for configuring Security Assertion Markup Language (SAML)
authentication:
- asca. The alias name specified when importing the identity provider assertion signing
certificate into the truststore file used for SAML authentication.
- cst. The allowed time difference between the Active Directory Federation Services (AD FS)
host system clock and the system clock on the master gateway node.
- std. The directory containing the custom truststore file required to use SAML authentication
on gateway nodes within the domain.
- stp. The custom truststore password used for SAML authentication.

DefineGatewayNode The following options are added for configuring SAML authentication:
- asca. The alias name specified when importing the identity provider assertion signing certificate into the truststore file used for SAML authentication.
- saml. Enables or disables SAML authentication in the Informatica domain.
- std. The directory containing the custom truststore file required to use SAML authentication on gateway nodes within the domain.
- stp. The custom truststore password used for SAML authentication.


UpdateDomainSamlConfig Renamed from UpdateSamlConfig. The following option is added for configuring SAML authentication:
- cst. The allowed time difference between the AD FS host system clock and the system clock on the master gateway node.

UpdateGatewayNode The following options are added for configuring SAML authentication:
- asca. The alias name specified when importing the identity provider assertion signing certificate into the truststore file used for SAML authentication.
- saml. Enables or disables SAML authentication in the Informatica domain.
- std. The directory containing the custom truststore file required to use SAML authentication on gateway nodes within the domain.
- stp. The custom truststore password used for SAML authentication.

For more information, see the "infasetup Command Reference" chapter in the Informatica 10.2 Command
Reference.

pmrep Commands
The following table describes new pmrep commands:

Command Description

CreateQuery Creates a query in the repository.

DeleteQuery Deletes a query from the repository.

The following table describes updates to pmrep commands:

Command Description

CreateConnection Contains the following updated option:
-w. Enables you to use a parameter in the password option.

ListObjectDependencies Contains the following updated option:
-o. The object type list includes query and deploymentgroup.

UpdateConnection Contains the following updated options:
-w. Enables you to use a parameter in the password option.
-x. Disables the use of password parameters if you use the parameter in password.

For more information, see the "pmrep Command Reference" chapter in the Informatica 10.2 Command
Reference.
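
For example, the following commands connect to a repository, create an object query, and then delete it. This is a sketch: the pmrep connect options are standard, while the createquery and deletequery option names are assumptions; verify them in the Command Reference.

    pmrep connect -r MyRepository -d MyDomain -n Administrator -x MyPassword
    pmrep createquery -n MyDeploymentQuery ...
    pmrep deletequery -n MyDeploymentQuery ...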

Data Types
This section describes new data type features in 10.2.

Informatica Data Types
This section describes new data types in the Developer tool.

Complex Data Types


Effective in version 10.2, some transformations support complex data types in mappings that run on the
Spark engine.

The following table describes the complex data types you can use in transformations:

Complex Data Type Description

array Contains an ordered collection of elements. All elements in the array must be of the same data
type. The elements can be of primitive or complex data type.

map Contains an unordered collection of key-value pairs. The key part must be of primitive data type.
The value part can be of primitive or complex data type.

struct Contains a collection of elements of different data types. The elements can be of primitive or
complex data types.

For more information, see the "Data Type Reference" appendix in the Informatica Big Data Management 10.2
User Guide.

Documentation
This section describes new or updated guides in 10.2.

The Informatica documentation contains the following changes:


Informatica Big Data Management Security Guide

Effective in version 10.2, the Informatica Big Data Management Security Guide is renamed to Informatica
Big Data Management Administrator Guide. It contains the security information and additional
administrator tasks for Big Data Management.

For more information, see the Informatica Big Data Management 10.2 Administrator Guide.

Informatica Big Data Management Installation and Upgrade Guide

Effective in version 10.2, the Informatica Big Data Management Installation and Upgrade Guide is
renamed to Informatica Big Data Management Hadoop Integration Guide. Effective in version 10.2, the
Data Integration Service can automatically install the Big Data Management binaries to the Hadoop
cluster to integrate the domain with the cluster. The integration tasks in the guide do not include
installation of the distribution package.

For more information, see the Informatica Big Data Management 10.2 Hadoop Integration Guide.

Informatica Catalog Administrator Guide

Effective in version 10.2, the Informatica Live Data Map Administrator Guide is renamed to Informatica
Catalog Administrator Guide.

For more information, see the Informatica Catalog Administrator Guide 10.2.



Informatica Administrator Reference for Enterprise Information Catalog

Effective in version 10.2, the Informatica Administrator Reference for Live Data Map is renamed to
Informatica Administrator Reference for Enterprise Information Catalog.

For more information, see the Informatica Administrator Reference for Enterprise Information Catalog
10.2.

Informatica Enterprise Information Catalog Custom Metadata Integration Guide

Effective in version 10.2, you can ingest custom metadata into the catalog using Enterprise Information
Catalog. For more information, see the new guide Informatica Enterprise Information Catalog 10.2 Custom
Metadata Integration Guide.

Informatica Enterprise Information Catalog Installation and Configuration Guide

Effective in version 10.2, the Informatica Live Data Map Installation and Configuration Guide is renamed
to Informatica Enterprise Information Catalog Installation and Configuration Guide.

For more information, see the Informatica Enterprise Information Catalog 10.2 Installation and
Configuration Guide.

Informatica Enterprise Information Catalog REST API Reference

Effective in version 10.2, you can use REST APIs exposed by Enterprise Information Catalog. For more
information, see the new guide Informatica Enterprise Information Catalog 10.2 REST API Reference.

Informatica Enterprise Information Catalog Upgrade Guide

Effective in version 10.2, the Informatica Live Data Map Upgrading from version <x> is renamed to
Informatica Enterprise Information Catalog Upgrading from versions 10.1, 10.1.1, 10.1.1 HF1, and 10.1.1
Update 2.

For more information, see the Informatica Enterprise Information Catalog Upgrading from versions 10.1,
10.1.1, 10.1.1 HF1, and 10.1.1 Update 2 guide.

Enterprise Information Catalog


This section describes new Enterprise Information Catalog features in 10.2.

New Data Sources


Effective in version 10.2, Informatica Enterprise Information Catalog allows you to extract metadata from
new data sources.

You can create resources in Informatica Catalog Administrator to extract metadata from the following data
sources:
Apache Atlas

Metadata framework for Hadoop.

Azure Microsoft SQL Data Warehouse

Cloud-based relational database to process a large volume of data.

Azure Microsoft SQL Server

Managed cloud database.

Azure WASB File Systems

Windows Azure Storage Blobs interface to load data to Azure blobs.



Erwin

Data modeling tool.

Informatica Axon

Enterprise data governance solution.

For more information about new resources, see the Informatica Catalog Administrator Guide 10.2.

Custom Scanner Framework


Effective in version 10.2, you can ingest custom metadata into the catalog.

Custom metadata is metadata that you define. You can define a custom model, create a custom resource
type, and create a custom resource to ingest custom metadata from a custom data source. You can use
custom metadata integration to extract and ingest metadata from custom data sources for which Enterprise
Information Catalog does not provide a model.

For more information about custom metadata integration, see the Informatica Enterprise Information Catalog
10.2 Custom Metadata Integration Guide.

REST APIs
Effective in version 10.2, you can use Informatica Enterprise Information Catalog REST APIs to access and
configure features related to the objects and models associated with a data source.

The REST APIs allow you to retrieve information related to objects and models associated with a data source.
In addition, you can create, update, or delete entities related to models and objects such as attributes,
associations, and classes.

For more information about the REST APIs, see the Informatica Enterprise Information Catalog 10.2
REST API Reference.

Composite Data Domains


Effective in version 10.2, you can create composite data domains. A composite data domain is a collection of
data domains or other composite data domains that you can link using rules. You can use a composite data
domain to search for the required details of an entity across multiple schemas in a data source.

You can view composite data domains for tabular assets in the Asset Details view after you create and
enable composite data domain discovery for resources in the Catalog Administrator. You can also search for
composite data domains and view details of the composite data domains in the Asset Details view.

For more information about composite data domains, see the "View Assets" chapter in the Informatica
Enterprise Information Catalog 10.2 User Guide and see the "Catalog Administrator Concepts" and "Managing
Composite Data Domains" chapters in the Informatica Catalog Administrator Guide 10.2.

Data Domains
This section describes new features related to data domains in Enterprise Information Catalog.

Define Data Domains


Effective in version 10.2, you can configure the following additional options when you create a data domain:

• Use reference tables, rules, and regular expressions to create a data rule or column rule.
• Use minimum conformance percentage or minimum conforming rows for data domain match.



• Use the auto-accept option to accept a data domain automatically in Enterprise Information Catalog when
the data domain match exceeds the configured auto-accept percentage.
For more information about data domains in Catalog Administrator, see the "Managing Data Domains"
chapter in the Informatica Catalog Administrator Guide 10.2.

Configure Data Domains


Effective in version 10.2, you can use predefined values or enter a conformance value for data domain match
when you create or edit a resource.

For more information about data domains and resources, see the "Managing Resources" chapter in the
Informatica Catalog Administrator Guide 10.2.

Data Domain Privileges


Effective in version 10.2, configure the Domain Management: Admin - View Domain and Domaingroup and
Domain Management: Admin - Edit Domain and Domaingroup privileges in Informatica Administrator to view,
create, edit, or delete data domains or data domain groups in the Catalog Administrator.

For more information about privileges, see the "Privileges and Roles" chapter in the Informatica Administrator
Reference for Enterprise Information Catalog 10.2.

Data Domain Curation


Effective in version 10.2, Enterprise Information Catalog accepts a data domain automatically if the data
domain match percentage exceeds the configured auto-accept percentage in Catalog Administrator.

For more information about data domain curation, see the "View Assets" chapter in the Informatica Enterprise
Information Catalog 10.2 User Guide.

Export and Import of Custom Attributes


Effective in version 10.2, you can export the custom attributes configured in a resource to a CSV file and
import the CSV file back into Enterprise Information Catalog. You can use the exported CSV file to assign
custom attribute values to multiple assets at the same time.

For more information about export and import of custom attributes, see the "View Assets" chapter in the
Informatica Enterprise Information Catalog 10.2 User Guide.

Rich Text as Custom Attribute Value


Effective in version 10.2, you can edit a custom attribute to assign multiple rich text strings as the attribute
value.

For more information about assigning custom attribute values to an asset, see the "View Assets" chapter in
the Informatica Enterprise Information Catalog 10.2 User Guide.

Transformation Logic
Effective in version 10.2, you can view transformation logic for assets in the Lineage and Impact view. The
Lineage and Impact view displays transformation logic for assets that contain transformations. The
transformation view displays transformation logic for data structures, such as tables and columns. The view
also displays various types of transformations, such as filter, joiner, lookup, expression, sorter, union, and
aggregate.

For more information about transformation logic, see the "View Lineage and Impact" chapter in the
Informatica Enterprise Information Catalog 10.2 User Guide.



Unstructured File Types
Effective in version 10.2, you can run the Data Domain Discovery profile or Column Profile and Data Domain
Discovery profile on unstructured file types and extended unstructured formats for all the rows in the data
source. The unstructured file types include compressed files, email formats, webpage files, Microsoft Excel,
Microsoft PowerPoint, Microsoft Word, and PDF. The extended unstructured formats include mp3, mp4, bmp,
and jpg.

For more information about unstructured file types, see the "Managing Resources" chapter in the Informatica
Catalog Administrator Guide 10.2.

Value Frequency
Configure and View Value Frequency
Effective in version 10.2, you can enable value frequency along with column data similarity in the Catalog
Administrator to compute the frequency of values in a data source. You can view the value frequency for view
column, table column, CSV field, XML file field, and JSON file data assets in the Asset Details view after you
run the value frequency on a data source in the Catalog Administrator.

For more information about configuring value frequency, see the "Catalog Administrator Concepts" chapter in
the Informatica Catalog Administrator Guide 10.2. To view value frequency for a data asset, see the "View
Assets" chapter in the Informatica Enterprise Information Catalog 10.2 User Guide.

Privileges to View Value Frequency in Enterprise Information Catalog


Effective in version 10.2, you need the following permission and privileges to view the value frequency for a
data asset:

• Read permission for the data asset.


• Data Privileges: View Data privilege.
• Data Privileges: View Sensitive Data privilege.

For more information about permissions and privileges, see the "Permissions Overview" and "Privileges and
Roles Overview" chapter in the Informatica Administrator Reference for Enterprise Information Catalog 10.2 .

Deployment Support for Azure HDInsight


Effective in version 10.2, you can deploy Enterprise Information Catalog on Azure HDInsight Hadoop
distribution.

For more information, see the "Create the Application Services" chapter in the Informatica Enterprise
Information Catalog 10.2 Installation and Configuration Guide.

Informatica Analyst
This section describes new Analyst tool features in 10.2.



Profiles
This section describes new features for profiles and scorecards.

Rule Specification
Effective in version 10.2, you can configure a rule specification in the Analyst tool and use the rule
specification in the column profile.

For more information about using rule specifications in the column profiles, see the "Rules in Informatica
Analyst" chapter in the Informatica 10.2 Data Discovery Guide.

Intelligent Data Lake


This section describes new Intelligent Data Lake features in 10.2.

Validate and Assess Data Using Visualization with Apache Zeppelin
Effective in version 10.2, after you publish data, you can validate your data visually to make sure that the data
is appropriate for your analysis from content and quality perspectives. You can then choose to fix the recipe,
thus supporting an iterative Prepare-Publish-Validate process.

Intelligent Data Lake uses Apache Zeppelin to view the worksheets in the form of a visualization Notebook
that contains graphs and charts. For more details about Apache Zeppelin, see the Apache Zeppelin
documentation. When you visualize data using Zeppelin's capabilities, you can view relationships between
different columns and create multiple charts and graphs.

When you open the visualization Notebook for the first time after a data asset is published, Intelligent Data
Lake uses the CLAIRE engine to create Smart Visualization suggestions in the form of histograms of the
numeric columns created by the user.

For more information about the visualization notebook, see the "Validate and Assess Data Using
Visualization with Apache Zeppelin" chapter in the Informatica Intelligent Data Lake 10.2 User Guide.

Assess Data Using Filters During Data Preview


Effective in version 10.2, you can filter the data during data preview for better assessment of data assets.
You can add filters for multiple fields and apply combinations of such filters. Filter conditions depend on the
data types. If available, you can view column value frequencies found during profiling for string values.

For more information, see the "Discover Data" chapter in the Informatica Intelligent Data Lake 10.2 User Guide.

Enhanced Layout of Recipe Panel


Effective in version 10.2, you can see a dedicated panel for recipe steps during data preparation. The recipe
steps are clearer and more concise, with color codes to indicate the function name, columns involved, and
input sources. You can edit the steps or delete them. You can also go back in time to a specific step in the
recipe and see the state of the data. You can refresh the recipe from the source. You can also see a separate
Ingredients panel that shows the sources used for the sheet.

For more information, see the "Prepare Data" chapter in the Informatica Intelligent Data Lake 10.2 User Guide.



Apply Data Quality Rules
Effective in version 10.2, while preparing data, you can use pre-built rules that are available during interactive
data preparation. These rules are created using the Informatica Developer or Informatica Analyst tool. If you
have a Big Data Quality license, thousands of pre-built rules are available to Intelligent Data Lake users as
well. Using pre-built rules promotes effective collaboration between business and IT through reusability of
rules and knowledge, consistency of usage, and extensibility.

For more information, see the "Prepare Data" chapter in the Informatica Intelligent Data Lake 10.2 User Guide.

View Business Terms for Data Assets in Data Preview and Worksheet View
Effective in version 10.2, you can view business terms associated with columns of data assets in data
preview as well as during data preparation.

For more information, see the "Discover Data" chapter in the Informatica Intelligent Data Lake 10.2 User Guide.

Prepare Data for Delimited Files


Effective in version 10.2, as a data analyst, you can cleanse, transform, combine, aggregate, and perform
other operations on delimited HDFS files that are already in the lake. You can preview these files before
adding them to a project. You can then configure the sampling settings of these assets and perform data
preparation operations on them.

For more information, see the "Prepare Data" chapter in the Informatica Intelligent Data Lake 10.2 User Guide.

Edit Joins in a Joined Worksheet


Effective in version 10.2, you can edit the join conditions for an existing joined worksheet, such as the join
keys and join types (inner and outer joins).

For more information, see the "Prepare Data" chapter in the Informatica Intelligent Data Lake User Guide.

Edit Sampling Settings for Data Preparation


Effective in version 10.2, you can edit the sampling settings while preparing your data asset. You can change
the columns selected for sampling, edit the filters selected, and change the sampling criteria.

For more information, see the "Prepare Data" chapter in the Informatica Intelligent Data Lake 10.2 User Guide.

Support for Multiple Enterprise Information Catalog Resources in the Data Lake
Effective in version 10.2, you can configure multiple Enterprise Information Catalog resources so that the
users can work with all types of assets and all applicable Hive schemas in the lake.

Use Oracle for the Data Preparation Service Repository


Effective in version 10.2, you can now use Oracle 11gR2 and 12c for the Data Preparation Service repository.



Improved Scalability for the Data Preparation Service
Effective in version 10.2, you can ensure horizontal scalability by using a grid for the Data Preparation Service
with multiple Data Preparation Service nodes. Improved scalability supports high-performance, interactive
data preparation as data volumes and the number of users increase.

Informatica Developer
This section describes new Developer tool features in 10.2.

Nonrelational Data Objects


Effective in version 10.2, you can import multiple nonrelational data objects at a time.

For more information, see the "Physical Data Objects" chapter in the Informatica 10.2 Developer Tool Guide.

Profiles
This section describes new features for profiles and scorecards.

Rule Specification
Effective in version 10.2, you can use rule specifications when you create a column profile in the Developer
tool. To use the rule specification, generate a mapplet from the rule specification and validate the mapplet as
a rule.

For more information about using rule specifications in the column profiles, see the "Rules in Informatica
Developer" chapter in the Informatica 10.2 Data Discovery Guide.

Informatica Installation
This section describes new installation features in 10.2.

Informatica Upgrade Advisor


Effective in version 10.2, you can run the Informatica Upgrade Advisor to validate the services and check for
obsolete services, supported databases, and supported operating systems in the domain before you perform
an upgrade.

For more information about the upgrade advisor, see the Informatica Upgrade Guides.

Intelligent Streaming
This section describes new Intelligent Streaming features in 10.2.

CSV Format
Effective in version 10.2, Streaming mappings can read and write data in CSV format.

For more information about the CSV format, see the "Sources and Targets in a Streaming Mapping" chapter in
the Informatica Intelligent Streaming 10.2 User Guide.

Data Types
Effective in version 10.2, Streaming mappings can read, process, and write hierarchical data. You can use
array, struct, and map complex data types to process the hierarchical data.

For more information, see the "Sources and Targets in a Streaming Mapping" chapter in the Informatica
Intelligent Streaming 10.2 User Guide.

Connections
Effective in version 10.2, you can use the following new messaging connections in Streaming mappings:

• AmazonKinesis. Access an Amazon Kinesis Stream as a source or Amazon Kinesis Firehose as a target. You can
create and manage an AmazonKinesis connection in the Developer tool or through infacmd.
• MapRStreams. Access MapRStreams as targets. You can create and manage a MapRStreams connection
in the Developer tool or through infacmd.

For more information, see the "Connections" chapter in the Informatica Intelligent Streaming 10.2 User Guide.

Pass-Through Mappings
Effective in version 10.2, you can pass any payload format directly from source to target in Streaming
mappings.

You can project columns in binary format to pass a payload from source to target in its original form or to
pass a payload format that is not supported.

For more information, see the "Sources and Targets in a Streaming Mapping" chapter in the Informatica
Intelligent Streaming 10.2 User Guide.

Sources and Targets


Effective in version 10.2, you can create the following new physical data objects:

• AmazonKinesis. Represents data in an Amazon Kinesis Stream or Amazon Kinesis Firehose Delivery
Stream.
• MapRStreams. Represents data in a MapR Stream.

For more information, see the "Sources and Targets in a Streaming Mapping" chapter in the Informatica
Intelligent Streaming 10.2 User Guide.

Transformation Support
Effective in version 10.2, you can use the Rank transformation with restrictions in Streaming mappings.

For more information, see the "Intelligent Streaming Mappings" chapter in the Informatica Intelligent
Streaming 10.2 User Guide.



Metadata Manager
This section describes new Metadata Manager features in 10.2.

Cloudera Navigator
Effective in version 10.2, you can provide the truststore file information to enable a secure connection to a
Cloudera Navigator resource. When you create or edit a Cloudera Navigator resource, enter the path and file
name of the truststore file for the Cloudera Navigator SSL instance and the password of the truststore file.

For more information about creating a Cloudera Navigator Resource, see the "Database Management
Resources" chapter in the Informatica Metadata Manager 10.2 Administrator Guide.

PowerCenter
This section describes new PowerCenter features in 10.2.

Audit Logs
Effective in version 10.2, you can generate audit logs when you import an .xml file into the PowerCenter
repository. To generate audit logs when you import one or more repository objects, enable the Security
Audit Trail configuration option in the PowerCenter Repository Service properties in the Administrator tool.
The user activity log captures all the audit messages.

The audit logs contain information about the file, such as the file name and size, the number of
objects imported, and the time of the import operation.

For more information, see the "pmrep Command Reference" chapter in the Informatica 10.2 Command
Reference, the Informatica 10.2 Application Service Guide, and the Informatica 10.2 Administrator Guide.

Bulk Upsert for SAP HANA Targets


Effective in version 10.2, when you upsert data into SAP HANA targets, you can configure the
EnableArrayUpsert custom property to upsert data in bulk and improve the session performance. You can
configure the EnableArrayUpsert custom property at the session level or at the PowerCenter Integration
Service level, and set its value to yes.
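
For example, to enable bulk upsert for a single session, add the custom property as the following name-value pair in the session configuration:

    EnableArrayUpsert=yes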

For more information, see the "Working with Targets" chapter in the Informatica 10.2 PowerCenter Designer
Guide.

Object Queries
Effective in version 10.2, you can create and delete object queries with the pmrep commands.

For more information, see the "pmrep Command Reference" chapter in the Informatica 10.2 Command
Reference.

Use Parameter in a Password


Effective in version 10.2, you can use the pmrep commands to create or update a connection with a
parameter in the password.

You can also use the pmrep commands to update a connection with or without a parameter in the password.

For more information, see the "pmrep Command Reference" chapter in the Informatica 10.2 Command
Reference.

PowerExchange Adapters
This section describes new PowerExchange adapter features in 10.2.

PowerExchange Adapters for Informatica


This section describes new Informatica adapter features in 10.2.

PowerExchange for Amazon Redshift


Effective in version 10.2, PowerExchange for Amazon Redshift includes the following new features:

• You can read data from or write data to the Amazon S3 buckets in the following regions:
- Asia Pacific (Mumbai)

- Asia Pacific (Seoul)

- Canada (Central)

- China (Beijing)

- EU (London)

- US East (Ohio)
• You can run Amazon Redshift mappings on the Spark engine. When you run the mapping, the Data
Integration Service pushes the mapping to a Hadoop cluster and processes the mapping on the Spark
engine, which significantly increases the performance.
• You can use AWS Identity and Access Management (IAM) authentication to securely control access to
Amazon S3 resources.
• You can connect to Amazon Redshift Clusters available in Virtual Private Cloud (VPC) through VPC
endpoints.
• You can use AWS Identity and Access Management (IAM) authentication to run a session on the EMR
cluster.

For more information, see the Informatica PowerExchange for Amazon Redshift 10.2 User Guide.

PowerExchange for Amazon S3


Effective in version 10.2, PowerExchange for Amazon S3 includes the following new features:

• You can read data from or write data to the Amazon S3 buckets in the following regions:
- Asia Pacific (Mumbai)

- Asia Pacific (Seoul)

- Canada (Central)

- China (Beijing)

- EU (London)

- US East (Ohio)



• You can compress data in the following formats when you read data from or write data to Amazon S3 in
the native environment and Spark engine:

Compression format Read Write

Bzip2 Yes Yes

Deflate No Yes

Gzip Yes Yes

Lzo Yes Yes

None Yes Yes

Snappy No Yes

• You can select the type of source from which you want to read data in the Source Type option under the
advanced properties for an Amazon S3 data object read operation. You can select Directory or File source
types.
• You can select the type of the data sources in the Resource Format option under the Amazon S3 data
objects properties. You can read data from the following source formats:
- Binary

- Flat

- Avro

- Parquet
• You can connect to Amazon S3 buckets available in Virtual Private Cloud (VPC) through VPC endpoints.
• You can run Amazon S3 mappings on the Spark engine. When you run the mapping, the Data Integration
Service pushes the mapping to a Hadoop cluster and processes the mapping on the Spark engine.
• You can choose to overwrite the existing files. You can select the Overwrite File(s) If Exists option in the
Amazon S3 data object write operation properties to overwrite the existing files.
• You can use AWS Identity and Access Management (IAM) authentication to securely control access to
Amazon S3 resources.
• You can filter the metadata to optimize the search performance in the Object Explorer view.
• You can use AWS Identity and Access Management (IAM) authentication to run a session on the EMR
cluster.

For more information, see the Informatica PowerExchange for Amazon S3 10.2 User Guide.

PowerExchange for HBase


Effective in version 10.2, PowerExchange for HBase contains the following new features:

• You can use PowerExchange for HBase to read from sources and write to targets stored in the WASB file
system on Azure HDInsight.
• You can associate a cluster configuration with an HBase connection. A cluster configuration is an object
in the domain that contains configuration information about the Hadoop cluster. The cluster configuration
enables the Data Integration Service to push mapping logic to the Hadoop environment.

For more information, see the Informatica PowerExchange for HBase 10.2 User Guide.

PowerExchange for HDFS
Effective in version 10.2, you can associate a cluster configuration with an HDFS connection. A cluster
configuration is an object in the domain that contains configuration information about the Hadoop cluster.
The cluster configuration enables the Data Integration Service to push mapping logic to the Hadoop
environment.

For more information, see the Informatica PowerExchange for HDFS 10.2 User Guide.

PowerExchange for Hive


Effective in version 10.2, you can associate a cluster configuration with a Hive connection. A cluster
configuration is an object in the domain that contains configuration information about the Hadoop cluster.
The cluster configuration enables the Data Integration Service to push mapping logic to the Hadoop
environment.

For more information, see the Informatica PowerExchange for Hive 10.2 User Guide.

PowerExchange for MapR-DB


Effective in version 10.2, PowerExchange for MapR-DB contains the following new features:

• You can run MapR-DB mappings on the Spark engine. When you run the mapping, the Data Integration
Service pushes the mapping to a Hadoop cluster and processes the mapping on the Spark engine, which
significantly increases the performance.
• You can configure dynamic partitioning for MapR-DB mappings that you run on the Spark engine.
• You can associate a cluster configuration with an HBase connection for MapR-DB. A cluster configuration
is an object in the domain that contains configuration information about the Hadoop cluster. The cluster
configuration enables the Data Integration Service to push mapping logic to the Hadoop environment.
For more information, see the Informatica PowerExchange for MapR-DB 10.2 User Guide.

PowerExchange for Microsoft Azure Blob Storage


Effective in version 10.2, you can read data from or write data to a subdirectory in Microsoft Azure Blob
Storage. You can use the Blob Container Override and Blob Name Override fields to read data from or write
data to a subdirectory in Microsoft Azure Blob Storage.

For more information, see the Informatica PowerExchange for Microsoft Azure Blob Storage 10.2 User Guide.

PowerExchange for Microsoft Azure SQL Data Warehouse


Effective in version 10.2, you can run Microsoft Azure SQL Data Warehouse mappings in a Hadoop
environment on Kerberos enabled clusters.

For more information, see the Informatica PowerExchange for Microsoft Azure SQL Data Warehouse 10.2 User
Guide.

PowerExchange for Salesforce


Effective in version 10.2, you can use version 39 of Salesforce API to create a Salesforce connection and
access Salesforce objects.

For more information, see the Informatica PowerExchange for Salesforce 10.2 User Guide.

PowerExchange Adapters for PowerCenter


This section describes new PowerCenter adapter features in version 10.2.



PowerExchange for Amazon Redshift
Effective in version 10.2, PowerExchange for Amazon Redshift includes the following new features:

• You can read data from or write data to the China (Beijing) region.
• When you import objects from AmazonRSCloudAdapter in the PowerCenter Designer, the PowerCenter
Integration Service lists the table names alphabetically.
• In addition to the existing recovery options in the vacuum table, you can select the Reindex option to
analyze the distribution of the values in an interleaved sort key column.
• You can configure the multipart upload option to upload a single object as a set of independent parts.
TransferManager API uploads the multiple parts of a single object to Amazon S3. After uploading,
Amazon S3 assembles the parts and creates the whole object. TransferManager API uses the multipart
uploads option to achieve performance and increase throughput when the content size of the data is large
and the bandwidth is high.
You can configure the Part Size and TransferManager Thread Pool Size options in the target session
properties.
• PowerExchange for Amazon Redshift uses the commons-beanutils.jar file to address potential security
issues when accessing properties. The following is the location of the commons-beanutils.jar file:
<Informatica installation directory>/server/bin/javalib/505100/commons-beanutils.jar
For more information, see the Informatica PowerExchange for Amazon Redshift 10.2 User Guide for
PowerCenter.

PowerExchange for Amazon S3


Effective in version 10.2, PowerExchange for Amazon S3 includes the following new features:

• You can read data from or write data to the China (Beijing) region.
• You can read multiple files from Amazon S3 and write data to a target.
• You can write multiple files to Amazon S3 target from a single source. You can configure the Distribution
Column options in the target session properties.
• When you create a mapping task to write data to Amazon S3 targets, you can configure partitions to
improve performance. You can configure the Merge Partition Files option in the target session properties.
• You can specify a directory path that is available on the PowerCenter Integration Service in the Staging
File Location property.
• You can configure the multipart upload option to upload a single object as a set of independent parts.
TransferManager API uploads the multiple parts of a single object to Amazon S3. After uploading,
Amazon S3 assembles the parts and creates the whole object. TransferManager API uses the multipart
uploads option to achieve performance and increase throughput when the content size of the data is large
and the bandwidth is high.
You can configure the Part Size and TransferManager Thread Pool Size options in the target session
properties.
For more information, see the Informatica PowerExchange for Amazon S3 version 10.2 User Guide for
PowerCenter.

PowerExchange for Microsoft Dynamics CRM


Effective in version 10.2, you can use the following target session properties with PowerExchange for
Microsoft Dynamics CRM:

• Add row reject reason. Select to include the reason for rejection of rows to the reject file.

• Alternate Key Name. Indicates whether the column is an alternate key for an entity. Specify the name of
the alternate key. You can use the alternate key in update and upsert operations.
• You can configure PowerExchange for Microsoft Dynamics CRM to run on the AIX platform.
For more information, see the Informatica PowerExchange for Microsoft Dynamics CRM 10.2 User Guide for
PowerCenter.

PowerExchange for SAP NetWeaver


Effective in version 10.2, PowerExchange for SAP NetWeaver includes the following new features:

• When you run ABAP mappings to read data from SAP tables, you can use the STRING, SSTRING, and
RAWSTRING data types. The SSTRING data type is represented as SSTR in PowerCenter.
• When you read or write data through IDocs, you can use the SSTRING data type.
• When you run ABAP mappings to read data from SAP tables, you can configure HTTP streaming.
For more information, see the Informatica PowerExchange for SAP NetWeaver 10.2 User Guide for
PowerCenter.

Rule Specifications
Effective in version 10.2, you can select a rule specification from the Model repository in Informatica
Developer and add the rule specification to a mapping. You can also deploy a rule specification as a web
service.

A rule specification is a read-only object in the Developer tool. Add a rule specification to a mapping in the
same way that you add a mapplet to a mapping. You can continue to select a mapplet that you generated
from a rule specification and add the mapplet to a mapping.

Add a rule specification to a mapping when you want the mapping to apply the logic that the current rule
specification represents. Add the corresponding mapplet to a mapping when you want to use or update the
mapplet logic independently of the rule specification.

When you add a rule specification to a mapping, you can specify the type of outputs on the rule specification.
By default, a rule specification has a single output port that contains the final result of the rule specification
analysis for each input data row. You can configure the rule specification to create an output port for every
rule set in the rule specification.

For more information, see the "Mapplets" chapter in the Informatica 10.2 Developer Mapping Guide.

Security
This section describes new security features in 10.2.



User Activity Logs
Effective in version 10.2, you can view login attempts from Informatica client applications in user activity
logs.

The user activity data includes the following properties for each login attempt from an Informatica client:

• Application name
• Application version
• Host name or IP address of the application host
If the client sets custom properties on login requests, the data includes the custom properties.

For more information, see the "Users and Groups" chapter in the Informatica 10.2 Security Guide.

Transformation Language
This section describes new transformation language features in 10.2.

Informatica Transformation Language


This section describes Informatica Transformation Language new features in 10.2.

Complex Functions
Effective in version 10.2, the transformation language introduces complex functions for complex data types.
Use complex functions to process hierarchical data on the Spark engine.

The transformation language includes the following complex functions:

• ARRAY
• CAST
• COLLECT_LIST
• CONCAT_ARRAY
• RESPEC
• SIZE
• STRUCT
• STRUCT_AS
For more information about complex functions, see the "Functions" chapter in the Informatica 10.2 Developer
Transformation Language Reference.
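
For example, the following expressions show two of the functions. This is a sketch that assumes string ports named emp_city and emp_state and an array port named emp_phones:

    -- Build an array from two string ports.
    ARRAY(emp_city, emp_state)
    -- Return the number of elements in an array port.
    SIZE(emp_phones)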

Complex Operators
Effective in version 10.2, the transformation language introduces complex operators for complex data types.
In mappings that run on the Spark engine, use complex operators to access elements of hierarchical data.

The transformation language includes the following complex operators:

• Subscript operator [ ]
• Dot operator .

For more information about complex operators, see the "Operators" chapter in the Informatica 10.2 Developer
Transformation Language Reference.
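
For example, the following expressions access elements of hierarchical data. This is a sketch that assumes a struct port named emp_address with a city element and an array port named emp_phones:

    -- Dot operator: access the city element of a struct port.
    emp_address.city
    -- Subscript operator: access one element of an array port by index.
    emp_phones[0]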

Window Functions
Effective in version 10.2, the transformation language introduces window functions. Use window functions to
process a small subset of a larger set of data on the Spark engine.

The transformation language includes the following window functions:

• LEAD. Provides access to a row at a given physical offset that comes after the current row.
• LAG. Provides access to a row at a given physical offset that comes before the current row.
For more information, see the "Functions" chapter in the Informatica 10.2 Transformation Language
Reference.
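
For example, the following expressions return values from neighboring rows. This is a sketch that assumes a numeric port named order_total; the window partition and order keys are configured on the transformation:

    -- Value of order_total from the previous row; returns 0 when no previous row exists.
    LAG(order_total, 1, 0)
    -- Value of order_total from the next row.
    LEAD(order_total, 1)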

Transformations
This section describes new transformation features in version 10.2.

Informatica Transformations
This section describes new features in Informatica transformations in 10.2.

Address Validator Transformation


This section describes the new Address Validator transformation features.

The Address Validator transformation contains additional address functionality for the following countries:

Austria
Effective in version 10.2, you can configure the Address Validator transformation to return a postal address
code identifier for a mailbox that has two valid street addresses. For example, a building at an intersection of
two streets might have an address on both streets. The building might prefer to receive mail at one of the
addresses. The other address remains a valid address, but the postal carrier does not use it to deliver mail.

Austria Post assigns a postal address code to both addresses. Austria Post additionally assigns a postal
address code identifier to the address that does not receive mail. The postal address code identifier is
identical to the postal address code of the preferred address. You can use the postal address code identifier
to look up the preferred address with the Address Validator transformation.

To find the postal address code identifier for an address in Austria, select the Postal Address Code Identifier
AT output port. Find the port in the AT Supplementary port group.

To find the address that a postal address identifier represents, select the Postal Address Code Identifier AT
input port. Find the port in the Discrete port group.

Czech Republic
Effective in version 10.2, you can configure the Address Validator transformation to add RUIAN ID values to a
valid Czech Republic address.

You can find the following RUIAN ID values:

• RUIANAM_ID. Uniquely identifies the address delivery point. To find this value, select the RUIAN Delivery Point Identifier output port.
• RUIANSO_ID. Identifies the address to building level. To find this value, select the RUIAN Building Identifier output port.
• RUIANTEA_ID. Identifies the building entrance. To find this value, select the RUIAN Building Entrance Identifier output port.

Find the ports in the CZ Supplementary port group.

Hong Kong
The Address Validator transformation includes the following features for Hong Kong:

Multilanguage support for Hong Kong addresses

Effective in version 10.2, the Address Validator transformation can read and write Hong Kong addresses
in Chinese or in English.

Use the Preferred Language property to select the preferred language for the addresses that the
transformation returns. The default language is Chinese. To return Hong Kong addresses in English,
update the property to ENGLISH.

Use the Preferred Script property to select the preferred character set for the address data. The default
character set is Hanzi. To return Hong Kong addresses in Latin characters, update the property to a Latin
or ASCII option. When you select a Latin script, address validation transliterates the address data into
Pinyin.

Single-line address validation in suggestion list mode

Effective in version 10.2, you can configure the Address Validator transformation to return valid
suggestions for a Hong Kong address that you enter on a single line. To return the suggestions,
configure the transformation to run in suggestion list mode.

Submit the address in the native Chinese language and in the Hanzi script. The Address Validator
transformation reads the address in the Hanzi script and returns the address suggestions in the Hanzi
script.
Submit a Hong Kong address in the following format:
[Province] [Locality] [Street] [House Number] [Building 1] [Building 2] [Sub-building]
When you submit a partial address, the transformation returns one or more address suggestions for the
address that you enter. When you enter a complete or almost complete address, the transformation
returns a single suggestion for the address that you enter.

To verify single-line addresses, use the Complete Address port.

Macau
The Address Validator transformation includes the following features for Macau:

Multilanguage support for Macau addresses

Effective in version 10.2, the Address Validator transformation can read and write Macau addresses in
Chinese or in Portuguese.

Use the Preferred Language property to select the preferred language for the addresses that the
transformation returns. The default language is Chinese. To return Macau addresses in Portuguese,
update the property to ALTERNATIVE_2.

Use the Preferred Script property to select the preferred character set for the address data. The default
character set is Hanzi. To return Macau addresses in Latin characters, update the property to a Latin or
ASCII option.

Note: When you select a Latin script with the default preferred language option, address validation
transliterates the Chinese address data into Cantonese or Mandarin. When you select a Latin script with
the ALTERNATIVE_2 preferred language option, address validation returns the address in Portuguese.

Single-line address verification for native Macau addresses in suggestion list mode

Effective in version 10.2, you can configure the Address Validator transformation to return valid
suggestions for a Macau address that you enter on a single line in suggestion list mode. When you enter
a partial address in suggestion list mode, the transformation returns one or more address suggestions
for the address that you enter. Submit the address in the Chinese language and in the Hanzi script. The
transformation returns address suggestions in the Chinese language and in the Hanzi script. Enter a
Macau address in the following format:
[Locality] [Street] [House Number] [Building]
Use the Preferred Language property to select the preferred language for the addresses. The default
preferred language is Chinese. Use the Preferred Script property to select the preferred character set for
the address data. The default preferred script is Hanzi. To verify single-line addresses, enter the
addresses in the Complete Address port.

Taiwan
Effective in version 10.2, you can configure the Address Validator transformation to return a Taiwan address
in the Chinese language or the English language.

Use the Preferred Language property to select the preferred language for the addresses that the
transformation returns. The default language is traditional Chinese. To return Taiwan addresses in English,
update the property to ENGLISH.

Use the Preferred Script property to select the preferred character set for the address data. The default
character set is Hanzi. To return Taiwan addresses in Latin characters, update the property to a Latin or ASCII
option.

Note: The Taiwan address structure in the native script lists all address elements in a single line. You can
submit the address as a single string in a Formatted Address Line port.

When you format an input address, enter the elements in the address in the following order:
Postal Code, Locality, Dependent Locality, Street, Dependent Street, House or Building Number, Building Name, Sub-Building Name

United States
The Address Validator transformation includes the following features for the United States:

Support for the Secure Hash Algorithm-compliant versions of CASS data files

Effective in version 10.2, the Address Validator transformation reads CASS certification data files that
comply with the SHA-256 standard.

The current CASS certification files are numbered [Link] through [Link]. To verify United
States addresses in certified mode, you must use the current files.

Note: The SHA-256-compliant files are not compatible with older versions of Informatica.

Support for Door Not Accessible addresses in certified mode

Effective in version 10.2, you can configure the Address Validator transformation to identify United
States addresses that do not provide a door or entry point for a mail carrier. The mail carrier might be
unable to deliver a large item to the address.

The United States Postal Service maintains a list of addresses for which a mailbox is accessible but for
which a physical entrance is inaccessible. For example, a residence might locate a mailbox outside a
locked gate or on a rural route. The address reference data includes the list of inaccessible addresses
that the USPS recognizes. Address validation can return the accessible status of an address when you
verify the address in certified mode.

To identify DNA addresses, select the Delivery Point Validation Door not Accessible port. Find the port in
the US Specific port group.

Support for No Secure Location address in certified mode

Effective in version 10.2, you can configure the Address Validator transformation to identify United
States addresses that do not provide a secure mailbox or reception point for mail. The mail carrier might
be unable to deliver a large item to the address.

The United States Postal Service maintains a list of addresses at which the mailbox is not secure. For
example, a retail store is not a secure location if the mail carrier can enter the store but cannot find a
mailbox or an employee to receive the mail. The address reference data includes the list of non-secure
addresses that the USPS recognizes. Address validation can return the non-secure status of an address
when you verify the address in certified mode.

To identify addresses without a secure location, select the Delivery Point Validation No Secure Location port. Find the port in
the US Specific port group.

Support for Post Office Box Only Delivery Zones

Effective in version 10.2, you can configure the Address Validator transformation to identify ZIP Codes
that contain post office box addresses and no other addresses. When all of the addresses in a ZIP Code
are post office box addresses, the ZIP Code represents a Post Office Box Only Delivery Zone.

The Address Validator transformation adds the value Y to an address to indicate that it contains a ZIP
Code in a Post Office Box Only Delivery Zone. The value enables the postal carrier to sort mail more
easily. For example, the mailboxes in a Post Office Box Only Delivery Zone might reside in a single post
office building. The postal carrier can deliver all mail to the Post Office Box Only Delivery Zone in a single
trip.

To identify Post Office Box Only Delivery Zones, select the Post Office Box Delivery Zone Indicator port.
Find the port in the US Specific port group.

For more information, see the Informatica 10.2 Developer Transformation Guide and the Informatica 10.2
Address Validator Port Reference.

Data Processor Transformation


This section describes new Data Processor transformation features.

JsonStreamer
Use the JsonStreamer object in a Data Processor transformation to process large JSON files. The
transformation splits very large JSON files into complete JSON messages. The transformation can then call
other Data Processor transformation components, or a Hierarchical to Relational transformation, to complete
the processing.

For more information, see the "Streamers" chapter in the Informatica Data Transformation 10.2 User Guide.

RunPCWebService
Use the RunPCWebService action to call a PowerCenter mapplet from within a Data Processor
transformation.

For more information, see the "Actions" chapter in the Informatica Data Transformation 10.2 User Guide.

PowerCenter Transformations

Evaluate Expression
Effective in version 10.2, you can evaluate expressions that you configure in the Expression Editor of an
Expression transformation. When you test an expression, you can enter sample data and then evaluate the
expression.
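For example, you might enter sample values for hypothetical FIRST_NAME and LAST_NAME ports and evaluate an expression such as the following:

CONCAT( FIRST_NAME, CONCAT( ' ', LAST_NAME ) )    -- returns 'John Smith' for the sample values 'John' and 'Smith'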

For more information about evaluating an expression, see the "Working with Transformations" chapter and
the "Expression Transformation" chapter in the Informatica PowerCenter 10.2 Transformation Guide.

Workflows
This section describes new workflow features in version 10.2.

Informatica Workflows
This section describes new features in Informatica workflows in 10.2.

Human Task Distribution Properties


Effective in version 10.2, you can store a list of the users or groups who can work on Human task instances
in an external database table. You select the table when you configure the Human task to define task
instances based on the values in a column of source data.

The table identifies the users or groups who can work on the task instances and specifies the column values
to associate with each user or group. You can update the table independently of the workflow configuration,
for example as users join or leave the project. When the workflow runs, the Data Integration Service uses the
current information in the table to assign task instances to users or groups.

You can also specify a range of numeric values or date values when you associate users or groups with the
values in a source data column. When one or more records contain a value in a range that you specify, the
Data Integration Service assigns the task instance to a user or group that you specify.
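For example, a hypothetical assignment table might associate each value in a Region source column with a group name:

Region value    Assignee
EAST            Group: Eastern_Analysts
WEST            Group: Western_Analysts

The column names and the exact table structure that the Data Integration Service expects are described in the "Human Task" chapter.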

For more information, see the "Human Task" chapter in the Informatica 10.2 Developer Workflow Guide.

Human Task Notification Properties


Effective in version 10.2, you can edit the subject line of an email notification that you configure in a Human
task. You can also add a workflow variable to the subject line of the notification.

A Human task can send email notifications when the Human task completes in the workflow and when a task
instance that the Human task defines changes status. To configure notifications for a Human task, update
the Notifications properties on the Human task in the workflow. To configure notifications for a task instance, update the Notification properties on the step within the Human task that defines the task instances.

When you configure notifications for a Human task instance, you can select an option to notify the task
instance owner in addition to any recipient that you specify. The option applies when a single user owns the
task instance. When you select the option to notify the task instance owner, you can optionally leave the
Recipients field empty.

For more information, see the "Human Task" chapter in the Informatica 10.2 Developer Workflow Guide.

Import from PowerCenter


Effective in version 10.2, you can import mappings with multiple pipelines, sessions, workflows, and worklets
from PowerCenter into the Model repository. Sessions within a workflow are imported as Mapping tasks in
the Model repository. Workflows are imported as workflows within the Model repository. Worklets within a
workflow are expanded and objects are imported into the Model repository.

Multiple pipelines within a mapping are imported as separate mappings into the Model repository based on
the target load order. If a workflow contains a session that runs a mapping with multiple pipelines, the import
process creates a separate Model repository mapping and mapping task for each pipeline in the PowerCenter
mapping to preserve the target load order.

For more information about importing from PowerCenter, see the "Import from PowerCenter" chapter in the
Informatica 10.2 Developer Mapping Guide and the "Workflows" chapter in the Informatica 10.2 Developer
Workflow Guide.

Chapter 3

Changes (10.2)
This chapter includes the following topics:

• Support Changes
• Application Services
• Big Data
• Command Line Programs
• Enterprise Information Catalog
• Informatica Analyst
• Intelligent Streaming
• PowerExchange Adapters
• Security
• Transformations
• Workflows

Support Changes
This section describes the support changes in 10.2.

Big Data Hadoop Distribution Support
Informatica big data products support a variety of Hadoop distributions. In each release, Informatica adds,
defers, and drops support for Hadoop distribution versions. Informatica might reinstate support for deferred
versions in a future release.

The following list shows the supported Hadoop distribution versions for Informatica 10.2 big data products:

Big Data Management
- Amazon EMR: 5.4
- Azure HDInsight: 3.6
- Cloudera CDH: 5.10, 5.11, 5.12, 5.13
- Hortonworks HDP: 2.4, 2.5, 2.6
- IBM BigInsights: 4.2
- MapR: 5.2 MEP 2.0, 5.2 MEP 3.0

Informatica Intelligent Streaming
- Amazon EMR: 5.8
- Azure HDInsight: NA
- Cloudera CDH: 5.11, 5.12, 5.13
- Hortonworks HDP: 2.6
- IBM BigInsights: NA
- MapR: 5.2 MEP 2.0

Enterprise Information Catalog
- Amazon EMR: NA
- Azure HDInsight: 3.6
- Cloudera CDH: 5.8, 5.9, 5.10, 5.11
- Hortonworks HDP: 2.5, 2.6
- IBM BigInsights: 4.2.x
- MapR: 3.1

Intelligent Data Lake
- Amazon EMR: 5.4
- Azure HDInsight: 3.6
- Cloudera CDH: 5.11, 5.12
- Hortonworks HDP: 2.6
- IBM BigInsights: 4.2
- MapR: 5.2 MEP 2.0

To see a list of the latest supported versions, see the Product Availability Matrix on the Informatica Customer
Portal: [Link]

Big Data Management Hadoop Distributions


The following list shows the supported Hadoop distribution versions and changes in Big Data Management 10.2:

• Amazon EMR 5.4. Dropped support for version 5.0.
• Azure HDInsight 3.6.x. Added support for version 3.6. Dropped support for version 3.5.
• Cloudera CDH 5.10.x, 5.11.x, 5.12.x, 5.13.x. Added support for versions 5.12 and 5.13. Dropped support for version 5.8. Deferred support for version 5.9.
• Hortonworks HDP 2.4.x, 2.5.x, 2.6.x. Dropped support for version 2.3. Note: To use Hortonworks 2.4 or 2.5 with Big Data Management 10.2, you must apply Emergency Bug Fix patches. See the following Knowledge Base articles: KB 251845 for Hortonworks 2.4 support and KB 251847 for Hortonworks 2.5 support.
• IBM BigInsights 4.2.x. No change.
• MapR 5.2 MEP 2.0.x, 5.2 MEP 3.0.x. Added support for versions 5.2 MEP 2.0 and 5.2 MEP 3.0. Dropped support for version 5.2 MEP 1.0.


To see a list of the latest supported versions, see the Product Availability Matrix on the Informatica network:
[Link]

Enterprise Information Catalog Hadoop Distributions


The following list shows the supported Hadoop distribution versions and changes in Enterprise Information Catalog 10.2 since 10.1.1 HotFix 1:

• Azure HDInsight 3.6. Added support for Azure HDInsight.
• Cloudera CDH 5.8, 5.9, 5.10, 5.11. No changes.
• Hortonworks HDP 2.5.x (Kerberos version), 2.6.x (non-Kerberos version). Added support for the 2.6 non-Kerberos version.
• IBM BigInsights 4.2. No change.

Intelligent Data Lake Hadoop Distributions


The following list shows the supported Hadoop distribution versions and changes in Intelligent Data Lake 10.2 since 10.1.1 HotFix 1:

• Amazon EMR 5.4. Added support for version 5.4. Dropped support for version 5.0.
• Azure HDInsight 3.6. Added support for version 3.6. Dropped support for version 3.5.
• Cloudera CDH 5.10, 5.11, 5.12. Added support for versions 5.10 and 5.12. Dropped support for version 5.8. Deferred support for version 5.9.
• Hortonworks HDP 2.6. Dropped support for version 2.3. Deferred support for versions 2.4 and 2.5.
• IBM BigInsights 4.2. No change.
• MapR 5.2 MEP 2.0. Added support for MapR.

Intelligent Streaming Hadoop Distributions


The following list shows the supported Hadoop distribution versions and changes in Intelligent Streaming 10.2 since 10.1.1 HotFix 1:

• Amazon EMR 5.4, 5.8. Added support for version 5.8.
• Cloudera CDH 5.10.x, 5.11.x, 5.12.x, 5.13.x. Added support for version 5.13. Dropped support for version 5.8. Deferred support for version 5.9.
• Hortonworks HDP 2.5.x, 2.6.x. Dropped support for version 2.3. Deferred support for version 2.4.
• MapR 5.2 MEP 2.0. Added support for version 5.2 MEP 2.0.

To see a list of the latest supported versions, see the Product Availability Matrix on the Informatica network:
[Link]

Metadata Manager
Custom Metadata Configurator (Deprecated)
Effective in version 10.2, Informatica deprecated the Custom Metadata Configurator in Metadata Manager.

You can use the load template to load metadata from metadata source files into a custom resource. Create a
load template for the models that use Custom Metadata Configurator templates.

For more information about using load templates, see "Custom XConnect Created with a Load Template" in the Informatica Metadata Manager 10.2 Custom Metadata Integration Guide.

Application Services
This section describes changes to Application Services in 10.2.

Content Management Service
Effective in version 10.2, you do not need to update the search index on the Model repository before you run
the infacmd cms purge command. The infacmd cms purge command updates the search index before it
purges unused tables from the reference data warehouse.
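For example, you can now run the purge command directly. The following sketch uses the standard infacmd connection options with hypothetical domain, service, and user names:

infacmd cms purge -dn MyDomain -sn MyContentManagementService -un Administrator -pd MyPassword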

Previously, you updated the search index before you ran the command so that the Model repository held an
up-to-date list of reference tables. The Content Management Service used the list of objects in the index to
select the tables to delete.

For more information, see the "Content Management Service" chapter in the Informatica 10.2 Application
Service Guide.

Data Integration Service


This section describes changes to the Data Integration Service in 10.2.

Execution Options
Effective in version 10.2, you configure the following execution options on the Properties view for the Data
Integration Service:

• Maximum On-Demand Execution Pool Size. Controls the number of on-demand jobs that can run
concurrently. Jobs include data previews, profiling jobs, REST and SQL queries, web service requests, and
mappings run from the Developer tool.
• Maximum Native Batch Execution Pool Size. Controls the number of deployed native jobs that each Data
Integration Service process can run concurrently.
• Maximum Hadoop Batch Execution Pool Size. Controls the number of deployed Hadoop jobs that can run
concurrently.
Previously, you configured the Maximum Execution Pool Size property to control the maximum number of jobs that the Data Integration Service process could run concurrently.

When you upgrade to 10.2, the value of the Maximum Execution Pool Size property is carried over to the following properties:

• Maximum On-Demand Execution Pool Size. Inherits the value of the Maximum Execution Pool Size property.
• Maximum Native Batch Execution Pool Size. Inherits the value of the Maximum Execution Pool Size property.
• Maximum Hadoop Batch Execution Pool Size. Inherits the value of the Maximum Execution Pool Size property if the original value was changed from the default of 10. If the value is 10, the Hadoop batch pool retains the default size of 100.
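For example, if you set the Maximum Execution Pool Size to 50 before the upgrade, all three pool size properties inherit the value 50 after the upgrade. If you left the property at the default value of 10, the on-demand and native batch pools inherit the value 10, and the Hadoop batch pool takes the default size of 100.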
For more information, see the "Data Integration Service" chapter in the Informatica 10.2 Application Service
Guide.

Big Data
This section describes the changes to big data in 10.2.

Hadoop Connection
Effective in version 10.2, the following changes affect Hadoop connection properties.

You can use the following properties to configure your Hadoop connection:

• Cluster Configuration. The name of the cluster configuration associated with the Hadoop environment. Appears in General Properties.
• Write Reject Files to Hadoop. Select the property to move the reject files to the HDFS location listed in the Reject File Directory property when you run mappings. Appears in Reject Directory Properties.
• Reject File Directory. The directory for Hadoop mapping files on HDFS when you run mappings. Appears in Reject Directory Properties.
• Blaze Job Monitor Address. The host name and port number for the Blaze Job Monitor. Appears in Blaze Configuration.
• YARN Queue Name. The YARN scheduler queue name used by the Spark engine that specifies available resources on a cluster. Appears in Blaze Configuration.

Effective in version 10.2, the following properties are renamed:

• ImpersonationUserName (previously HiveUserName). Hadoop impersonation user. The user name that the Data Integration Service impersonates to run mappings in the Hadoop environment.
• Hive Staging Database Name (previously Database Name). Namespace for Hive staging tables. Appears in Common Properties. Previously appeared in Hive Properties.
• HiveWarehouseDirectory (previously HiveWarehouseDirectoryOnHDFS). The absolute HDFS file path of the default database for the warehouse that is local to the cluster.
• Blaze Staging Directory (previously Temporary Working Directory on HDFS, CadiWorkingDirectory). The HDFS file path of the directory that the Blaze engine uses to store temporary files. Appears in Blaze Configuration.
• Blaze User Name (previously Blaze Service User Name, CadiUserName). The owner of the Blaze service and Blaze service logs. Appears in Blaze Configuration.
• YARN Queue Name (previously Yarn Queue Name, CadiAppYarnQueueName). The YARN scheduler queue name used by the Blaze engine that specifies available resources on a cluster. Appears in Blaze Configuration.
• BlazeMaxPort (previously CadiMaxPort). The maximum value for the port number range for the Blaze engine.
• BlazeMinPort (previously CadiMinPort). The minimum value for the port number range for the Blaze engine.
• BlazeExecutionParameterList (previously CadiExecutionParameterList). An optional list of configuration parameters to apply to the Blaze engine.
• SparkYarnQueueName (previously YarnQueueName). The YARN scheduler queue name used by the Spark engine that specifies available resources on a cluster.
• Spark Staging Directory (previously Spark HDFS Staging Directory). The HDFS file path of the directory that the Spark engine uses to store temporary files for running jobs.

Effective in version 10.2, the following properties are removed from the connection and imported into the cluster configuration:

• Resource Manager Address. The service within Hadoop that submits requests for resources or spawns YARN applications. Imported into the cluster configuration as the property [Link]. Previously appeared in Hadoop Cluster Properties.
• Default File System URI. The URI to access the default Hadoop Distributed File System. Imported into the cluster configuration as the property [Link] or [Link]. Previously appeared in Hadoop Cluster Properties.

Effective in version 10.2, the following properties are deprecated and are removed from the connection:

• Type. The connection type. Previously appeared in General Properties.
• Metastore Execution Mode*. Controls whether to connect to a remote metastore or a local metastore. Previously appeared in Hive Configuration.
• Metastore Database URI*. The JDBC connection URI used to access the data store in a local metastore setup. Previously appeared in Hive Configuration.
• Metastore Database Driver*. Driver class name for the JDBC data store. Previously appeared in Hive Configuration.
• Metastore Database User Name*. The metastore database user name. Previously appeared in Hive Configuration.
• Metastore Database Password*. The password for the metastore user name. Previously appeared in Hive Configuration.
• Remote Metastore URI*. The metastore URI used to access metadata in a remote metastore setup. This property is imported into the cluster configuration as the property [Link]. Previously appeared in Hive Configuration.
• Job Monitoring URL. The URL for the MapReduce JobHistory server. Previously appeared in Hive Configuration.

* These properties are deprecated in 10.2. When you upgrade to 10.2, the property values that you set in a previous release are saved in the repository, but they do not appear in the connection properties.

HBase Connection Properties


Effective in version 10.2, the following properties are removed from the connection and imported into the
cluster configuration:

• ZooKeeper Host(s). Name of the machine that hosts the ZooKeeper server.
• ZooKeeper Port. Port number of the machine that hosts the ZooKeeper server.
• Enable Kerberos Connection. Enables the Informatica domain to communicate with the HBase master server or region server that uses Kerberos authentication.
• HBase Master Principal. Service Principal Name (SPN) of the HBase master server.
• HBase Region Server Principal. Service Principal Name (SPN) of the HBase region server.

Hive Connection Properties


Effective in version 10.2, PowerExchange for Hive has the following changes:

• You cannot use a PowerExchange for Hive connection if you want the Hive driver to run mappings in the
Hadoop cluster. To use the Hive driver to run mappings in the Hadoop cluster, use a Hadoop connection.

• The following properties are removed from the connection and imported into the cluster configuration:

- Default FS URI. The URI to access the default Hadoop Distributed File System.
- JobTracker/Yarn Resource Manager URI. The service within Hadoop that submits the MapReduce tasks to specific nodes in the cluster.
- Hive Warehouse Directory on HDFS. The absolute HDFS file path of the default database for the warehouse that is local to the cluster.
- Metastore Execution Mode. Controls whether to connect to a remote metastore or a local metastore.
- Metastore Database URI. The JDBC connection URI used to access the data store in a local metastore setup.
- Metastore Database Driver. Driver class name for the JDBC data store.
- Metastore Database User Name. The metastore database user name.
- Metastore Database Password. The password for the metastore user name.
- Remote Metastore URI. The metastore URI used to access metadata in a remote metastore setup. This property is imported into the cluster configuration as the property [Link].

HBase Connection Properties for MapR-DB


Effective in version 10.2, the Enable Kerberos Connection property is removed from the HBase connection
for MapR-DB and imported into the cluster configuration.

Mapping Run-time Properties
This section lists changes to mapping-run time properties.

Execution Environment
Effective in version 10.2, you can configure the Reject File Directory as a new property in the Hadoop
Execution Environment.

Reject File Directory. The directory for Hadoop mapping files on HDFS when you run mappings in the Hadoop environment. The Blaze engine can write reject files to the Hadoop environment for flat file, HDFS, and Hive targets. The Spark and Hive engines can write reject files to the Hadoop environment for flat file and HDFS targets.

Choose one of the following options:
- On the Data Integration Service machine. The Data Integration Service stores the reject files based on the RejectDir system parameter.
- On the Hadoop Cluster. The reject files are moved to the reject directory configured in the Hadoop connection. If the directory is not configured, the mapping will fail.
- Defer to the Hadoop Connection. The reject files are moved based on whether the reject directory is enabled in the Hadoop connection properties. If the reject directory is enabled, the reject files are moved to the reject directory configured in the Hadoop connection. Otherwise, the Data Integration Service stores the reject files based on the RejectDir system parameter.

Monitoring
Effective in version 10.2, the AllHiveSourceTables row in the Summary Statistics view in the Administrator
tool includes records read from the following sources:

• Original Hive sources in the mapping.
• Staging Hive tables defined by the Hive engine.
• Staging data between two linked MapReduce jobs in each query.
If the LDTM session includes one MapReduce job, the AllHiveSourceTables statistic only includes original
Hive sources in the mapping.

For more information, see the "Monitoring Mappings in the Hadoop Environment" chapter of the Big Data
Management 10.2 User Guide.

S3 Access and Secret Key Properties


Effective in version 10.2, the following properties are included in the list of sensitive properties of a cluster
configuration:

• [Link]
• [Link]
• [Link]
• [Link]
• [Link]
• [Link]
Sensitive properties are included but masked when you generate a cluster configuration archive file to deploy
on the machine that runs the Developer tool.

Previously, you configured these properties in .xml configuration files on the machines that run the Data
Integration Service and the Developer tool.

For more information about sensitive properties, see the Informatica Big Data Management 10.2 Administrator
Guide.

Sqoop
Effective in version 10.2, if you create a password file to access a database, Sqoop ignores the password file.
Sqoop uses the value that you configure in the Password field of the JDBC connection.

Previously, you could create a password file to access a database.

For more information, see the "Mapping Objects in the Hadoop Environment" chapter in the Informatica Big
Data Management 10.2 User Guide.

Command Line Programs


This section describes changes to commands in 10.2.

infacmd ihs Commands


Obsolete Commands

The following infacmd ihs commands are obsolete:

• BackupData. Backs up HDFS data in the internal Hadoop cluster to a zip file. When you back up the data, the Informatica Cluster Service saves all the data created by Enterprise Information Catalog, such as HBase data, scanner data, and ingestion data.
• removesnapshot. Removes existing HDFS snapshots so that you can run the infacmd ihs BackupData command successfully to back up HDFS data.

infacmd ldm Commands


Changed Commands

The following infacmd ldm commands are changed:

• BackupData. Effective in 10.2, the name of the command is changed to BackupContents.
• LocalDestination. Effective in 10.2, the -of option is added to the BackupContents command.
• restoreData. Effective in 10.2, the name of the command is changed to restoreContents.

For more information, see the "infacmd ldm Command Reference" chapter in the Informatica 10.2 Command
Reference.

Enterprise Information Catalog


This section describes the changes to Informatica Enterprise Information Catalog in 10.2.

Product Name Changes


Effective in version 10.2, Enterprise Information Catalog includes the following name changes:

• The product Informatica Live Data Map is renamed to Informatica Enterprise Information Catalog.
• The Informatica Live Data Map Administrator tool is renamed to Informatica Catalog Administrator.
• The installer is renamed from Live Data Map to Enterprise Information Catalog.

Informatica Analyst
This section describes changes to the Analyst tool in 10.2.

Parameters
This section describes changes to Analyst tool parameters.

System Parameters
Effective in version 10.2, the Analyst tool displays the file path of system parameters in the following format:
$$[Parameter Name]/[Path].

Previously, the Analyst tool displayed the local file path of the data object and did not resolve the system
parameter.

For more information about viewing data objects, see the Informatica 10.2 Analyst Tool Guide.

Intelligent Streaming
This section describes the changes to Informatica Intelligent Streaming in 10.2.

Kafka Data Object Changes


Effective in version 10.2, when you configure the data operation read properties, you can specify the time
from which the Kafka source starts reading Kafka messages from a Kafka topic. You can read from or write
to a Kafka cluster that is configured for Kerberos authentication.

For more information, see the "Sources and Targets in a Streaming Mapping" chapter in the Informatica
Intelligent Streaming 10.2 User Guide.

PowerExchange Adapters
This section describes changes to PowerExchange adapters in version 10.2.

PowerExchange Adapters for Informatica


This section describes changes to Informatica adapters in 10.2.

PowerExchange for Amazon S3


Effective in version 10.2, PowerExchange for Amazon S3 has the following changes:

• You can provide the folder path without specifying the bucket name in the advanced properties for read
and write operation in the following format: /<folder_name>. The Data Integration Service appends this
folder path with the folder path that you specify in the connection properties.
Previously, you specified the bucket name along with the folder path in the advanced properties for read
and write operation in the following format: <bucket_name>/<folder_name>.
• In the metadata import browser, you can view the bucket name directory and its subdirectory list in the left panel and the selected list of files in the right panel.
Previously, PowerExchange for Amazon S3 displayed the list of bucket names in the left panel and the folder path along with the file names in the right panel of the metadata import browser.
• PowerExchange for Amazon S3 creates the data object read operation and data object write operation for
the Amazon S3 data object automatically.
Previously, you had to create the data object read operation and data object write operation for the
Amazon S3 data object manually.

For more information, see the Informatica PowerExchange for Amazon S3 10.2 User Guide.

PowerExchange Adapters for PowerCenter


This section describes changes to PowerCenter adapters in version 10.2.

PowerExchange for Amazon Redshift


Effective in version 10.2, you must provide the schema name for the Amazon Redshift table to run mappings
successfully.

Previously, mappings would run even if the public schema was selected.

For more information, see the Informatica PowerExchange for Amazon Redshift 10.2 User Guide for
PowerCenter.

PowerExchange for Email Server


Effective in version 10.2, PowerExchange for Email Server installs with the Informatica services.

Previously, PowerExchange for Email Server had a separate installer.

For more information, see the Informatica PowerExchange for Email Server 10.2 User Guide for PowerCenter.

PowerExchange for JD Edwards EnterpriseOne


Effective in version 10.2, PowerExchange for JD Edwards EnterpriseOne installs with the Informatica
services.

Previously, PowerExchange for JD Edwards EnterpriseOne had a separate installer.

For more information, see the Informatica PowerExchange for JD Edwards EnterpriseOne 10.2 User Guide for
PowerCenter.

PowerExchange for JD Edwards World
Effective in version 10.2, PowerExchange for JD Edwards World installs with the Informatica services.

Previously, PowerExchange for JD Edwards World had a separate installer.

For more information, see the Informatica PowerExchange for JD Edwards World 10.2 User Guide for
PowerCenter.

PowerExchange for LDAP


Effective in version 10.2, PowerExchange for LDAP installs with the Informatica services.

Previously, PowerExchange for LDAP had a separate installer.

For more information, see the Informatica PowerExchange for LDAP 10.2 User Guide for PowerCenter.

PowerExchange for Lotus Notes


Effective in version 10.2, PowerExchange for Lotus Notes installs with the Informatica services.

Previously, PowerExchange for Lotus Notes had a separate installer.

For more information, see the Informatica PowerExchange for Lotus Notes 10.2 User Guide for PowerCenter.

PowerExchange for Oracle E-Business Suite


Effective in version 10.2, PowerExchange for Oracle E-Business Suite installs with the Informatica services.

Previously, PowerExchange for Oracle E-Business Suite had a separate installer.

For more information, see the Informatica PowerExchange for Oracle E-Business Suite 10.2 User Guide for
PowerCenter.

PowerExchange for SAP NetWeaver


Effective in version 10.2, Informatica does not package secure transports in a separate folder named Secure
within the Informatica installer .zip file. Informatica packages both standard and secure transports in the
following folders:

• Unicode cofiles: Informatica installer zip file/saptrans/mySAP/UC/cofiles
• Unicode data files: Informatica installer zip file/saptrans/mySAP/UC/data
• Non-Unicode cofiles: Informatica installer zip file/saptrans/mySAP/NUC/cofiles
• Non-Unicode data files: Informatica installer zip file/saptrans/mySAP/NUC/data
Previously, Informatica packaged the secure transports in the following folders:

• Unicode cofiles: Informatica installer zip file/saptrans/mySAP/UC/Secure/cofiles
• Unicode data files: Informatica installer zip file/saptrans/mySAP/UC/Secure/data
• Non-Unicode cofiles: Informatica installer zip file/saptrans/mySAP/NUC/Secure/cofiles
• Non-Unicode data files: Informatica installer zip file/saptrans/mySAP/NUC/Secure/data
For more information, see the Informatica PowerExchange for SAP NetWeaver 10.2 User Guide for
PowerCenter.

PowerExchange for Siebel


Effective in version 10.2, PowerExchange for Siebel installs with the Informatica services.

Previously, PowerExchange for Siebel had a separate installer.

For more information, see the Informatica PowerExchange for Siebel 10.2 User Guide for PowerCenter.

Security
This section describes changes to security features in 10.2.

SAML Authentication
Effective in version 10.2, you must configure Security Assertion Markup Language (SAML) authentication at
the domain level, and on all gateway nodes within the domain.

Previously, you had to configure SAML authentication at the domain level only.

For more information, see the "SAML Authentication for Informatica Web Applications" chapter in the
Informatica 10.2 Security Guide.

Transformations
This section describes changed transformation behavior in 10.2.

Informatica Transformations
This section describes the changes to the Informatica transformations in 10.2.

Address Validator Transformation


This section describes the changes to the Address Validator transformation.

The Address Validator transformation contains the following updates to address functionality:

All Countries
Effective in version 10.2, the Address Validator transformation uses version 5.11.0 of the Informatica
Address Verification software engine. The engine enables the features that Informatica adds to the Address
Validator transformation in version 10.2.

Previously, the transformation used version 5.9.0 of the Informatica Address Verification software engine.

Japan
Effective in version 10.2, you can configure a single mapping to return the Choumei Aza code for a current
address in Japan. To return the code, select the Current Choumei Aza Code JP port. You can use the code to
find the current version of any legacy address that Japan Post recognizes.

Previously, you used the New Choumei Aza Code JP port to return incremental changes to the Choumei Aza
code for an address. The transformation did not include the Current Choumei Aza Code JP port. You needed
to configure two or more mappings to verify a current Choumei Aza code and the corresponding address.

United Kingdom
Effective in version 10.2, you can configure the Address Validator transformation to return postal,
administrative, and traditional county information from the Royal Mail Postcode Address File. The
transformation returns the information on the Province ports.

Previously, the transformation returned postal county information when the information was postally
relevant.

The following list shows the address element that you can select for each county information type:

• Postal county information: Province 1
• Administrative county information: Province 2
• Traditional county information: Province 3

Updated Certification Standards in Multiple Countries


Effective in version 10.2, Informatica supports the following certification standards for address verification
software:

• Address Matching Approval System (AMAS) from Australia Post. Updated to Cycle 2017.
• SendRight certification from New Zealand Post. Updated to Cycle 2017.
• Software Evaluation and Recognition Program (SERP) from Canada Post. Updated to Cycle 2017.
Informatica continues to support the current versions of the Coding Accuracy Support System (CASS)
standards from the United States Postal Service and the Service National de L'Adresse (SNA) standard from
La Poste of France.

For more information, see the Informatica 10.2 Developer Transformation Guide and the Informatica 10.2
Address Validator Port Reference.

For comprehensive information about the updates to the Informatica Address Verification software engine
from version 5.9.0 through version 5.11.0, see the Informatica Address Verification 5.11.0 Release Guide.

Expression Transformation
Effective in version 10.2, you can configure the Expression transformation to be an active transformation on
the Spark engine by using a window function or an aggregate function with windowing properties.

Previously, the Expression transformation could only be a passive transformation.

For more information, see the Big Data Management 10.2 Administrator Guide.

Workflows
This section describes changed workflow behavior in version 10.2.

Informatica Workflows
This section describes the changes to Informatica workflow behavior in 10.2.

Workflow Variables in Task Instance Notifications


Effective in version 10.2, the workflow variable $[Link] changes name to $[Link].
The usage of the variable does not change in version 10.2.

For more information, see the "Human Task" chapter in the Informatica 10.2 Developer Workflow Guide.

Chapter 4

Release Tasks (10.2)


This chapter includes the following topic:

• PowerExchange Adapters

PowerExchange Adapters
This section describes release tasks for PowerExchange adapters in version 10.2.

PowerExchange Adapters for PowerCenter


This section describes release tasks for PowerCenter adapters in version 10.2.

PowerExchange for Amazon Redshift


Effective in version 10.2, for existing mappings where public schema is selected, ensure that the schema
name is correct and works for the Redshift table. The public schema might not work for all the tables.

For more information, see the Informatica 10.2 PowerExchange for Amazon Redshift User Guide for
PowerCenter.

PowerExchange for Amazon S3


Effective in version 10.2, when you upgrade from 9.5.1 or 9.6.1, the upgrade process does not retain all
property values in the connection. After you upgrade, you must reconfigure the following properties:

• Access Key. The access key ID used to access the Amazon account resources. Required if you do not use AWS Identity and Access Management (IAM) authentication. Note: Ensure that you have valid AWS credentials before you create a connection.
• Secret Key. The secret access key used to access the Amazon account resources. This value is associated with the access key and uniquely identifies the account. You must specify this value if you specify the access key ID. Required if you do not use AWS Identity and Access Management (IAM) authentication.
• Master Symmetric Key. Optional. Provide a 256-bit AES encryption key in the Base64 format when you enable client-side encryption. You can generate a key using a third-party tool. If you specify a value, ensure that you specify the encryption type as client side encryption in the target session properties.

For more information, see the Informatica 10.2 PowerExchange for Amazon S3 User Guide for PowerCenter.

PowerExchange for Microsoft Dynamics CRM


When you upgrade from an earlier version, you must copy the .jar files in the installation location of 10.2.

• For the client, if you upgrade from 9.x to 10.2, copy the local_policy.jar, US_export_policy.jar, and
cacerts files from the following 9.x installation folder <Informatica installation directory>\clients
\java\jre\lib\security to the following 10.2 installation folder <Informatica installation
directory>\clients\java\32bit\jre\lib\security.
If you upgrade from 10.x to 10.2, copy the local_policy.jar, US_export_policy.jar, and cacerts files
from the following 10.x installation folder <Informatica installation directory>\clients\java
\32bit\jre\lib\security to the corresponding 10.2 folder.
• For the server, copy the local_policy.jar, US_export_policy.jar, and cacerts files from the <Informatica installation directory>/java/jre/lib/security folder of the previous release to the corresponding 10.2 folder.
When you upgrade from an earlier version, you must copy the msdcrm folder in the installation location of
10.2.

• For the client, copy the msdcrm folder from the <Informatica installation directory>\clients
\PowerCenterClient\client\bin\javalib folder of the previous release to the corresponding 10.2
folder.
• For the server, copy the msdcrm folder from the <Informatica installation directory>/server/bin/
javalib folder of the previous release to the corresponding 10.2 folder.

PowerExchange for SAP NetWeaver


Effective in version 10.2, Informatica implemented the following changes in PowerExchange for SAP
NetWeaver support for PowerCenter:

Dropped Support for the CPI-C Protocol

Effective in version 10.2, Informatica dropped support for the CPI-C protocol.

Use the RFC or HTTP protocol to generate and install ABAP programs while reading data from SAP
tables.

If you upgrade ABAP mappings that were generated with the CPI-C protocol, you must complete the
following tasks:

1. Regenerate and reinstall the ABAP program by using stream (RFC/HTTP) mode.
2. Create a System user or a communication user with the appropriate authorization profile to enable
dialog-free communication between SAP and Informatica.

For more information, see the Informatica PowerExchange for SAP NetWeaver 10.2 User Guide for
PowerCenter.

Dropped Support for ABAP Table Reader Standard Transports

Effective in version 10.2, Informatica dropped support for the ABAP table reader standard transports.
Informatica will not ship the standard transports for ABAP table reader. Informatica will ship only secure
transports for ABAP table reader.

If you upgrade from an earlier version, you must delete the standard transports and install the secure
transports.

For more information, see the Informatica PowerExchange for SAP NetWeaver 10.2 Transport Versions
Installation Notice.

Added Support for HTTP Streaming for ABAP Table Reader Mappings

Effective in version 10.2, when you run ABAP mappings to read data from SAP tables, you can configure
HTTP streaming.

To use HTTP stream mode for upgraded ABAP mappings, perform the following tasks:

1. Regenerate and reinstall the ABAP program in stream mode.
2. Create an SAP ABAP HTTP streaming connection.
3. Configure the session to use the SAP streaming reader, an SAP ABAP HTTP streaming connection, and an SAP R/3 application connection.

Note: If you configure HTTP streaming, but do not regenerate and reinstall the ABAP program in stream
mode, the session fails.

Part II: Version 10.1.1
This part contains the following chapters:

• New Features, Changes, and Release Tasks (10.1.1 HotFix 1)
• New Features, Changes, and Release Tasks (10.1.1 Update 2)
• New Features, Changes, and Release Tasks (10.1.1 Update 1)
• New Products (10.1.1)
• New Features (10.1.1)
• Changes (10.1.1)
• Release Tasks (10.1.1)

Chapter 5

New Features, Changes, and Release Tasks (10.1.1 HotFix 1)
This chapter includes the following topics:

• New Products (10.1.1 HotFix 1)
• New Features (10.1.1 HotFix 1)
• Changes (10.1.1 HotFix 1)

New Products (10.1.1 HotFix 1)


This section describes new products in version 10.1.1 HotFix 1.

PowerExchange for Cloud Applications


Effective in version 10.1.1 HotFix 1, you can use PowerExchange for Cloud Applications to connect to Informatica Cloud from PowerCenter. You can read data from or write data to data sources for which connections are available in Informatica Cloud. You do not need the PowerExchange adapter for the respective cloud application in PowerCenter.

For more information, see the Informatica PowerExchange for Cloud Applications 10.1.1 HotFix 1 User Guide.

New Features (10.1.1 HotFix 1)


This section describes new features in version 10.1.1 HotFix 1.

Command Line Programs


This section describes new commands in version 10.1.1 HotFix 1.

infacmd dis Commands (10.1.1 HF1)
The following infacmd dis commands are new:

• disableMappingValidationEnvironment. Disables the mapping validation environment for mappings that are deployed to the Data Integration Service.
• enableMappingValidationEnvironment. Enables a mapping validation environment for mappings that are deployed to the Data Integration Service.
• setMappingExecutionEnvironment. Specifies the mapping execution environment for mappings that are deployed to the Data Integration Service.

For more information, see the "Infacmd dis Command Reference" chapter in the Informatica 10.1.1 HotFix1
Command Reference.

infacmd mrs Commands (10.1.1 HF1)


The following infacmd mrs commands are new:

• disableMappingValidationEnvironment. Disables the mapping validation environment for mappings that you run from the Developer tool.
• enableMappingValidationEnvironment. Enables a mapping validation environment for mappings that you run from the Developer tool.
• setMappingExecutionEnvironment. Specifies the mapping execution environment for mappings that you run from the Developer tool.

For more information, see the "Infacmd mrs Command Reference" chapter in the Informatica 10.1.1 HotFix1
Command Reference.

infacmd ps Command
The following infacmd ps command is new:

• restoreProfilesAndScorecards. Restores profiles and scorecards from a previous version to version 10.1.1 HotFix 1.

For more information, see the "infacmd ps Command Reference" chapter in the Informatica 10.1.1 HotFix 1
Command Reference.

Informatica Analyst
This section describes new Analyst tool features in version 10.1.1 HotFix 1.

Profiles and Scorecards
This section describes new Analyst tool features for profiles and scorecards.

Invalid Rows Worksheet


Effective in version 10.1.1 HotFix1, scorecard export results include invalid source rows after you choose the
Data > All option in the Export data to a file dialog box.

For more information about scorecards, see the "Scorecards in Informatica Analyst" chapter in the
Informatica 10.1.1 HotFix1 Data Discovery Guide.

PowerCenter
This section describes new PowerCenter features in version 10.1.1 HotFix 1.

Pushdown Optimization for Greenplum


Effective in version 10.1.1 HotFix 1, when the connection type is ODBC, the PowerCenter Integration Service
can push TRUNC(DATE), CONCAT(), and TO_CHAR(DATE) functions to Greenplum using source-side and full
pushdown optimization.
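For example, expressions such as the following on a hypothetical ORDER_DATE port can now be pushed to the Greenplum database instead of being processed by the PowerCenter Integration Service:

TO_CHAR( ORDER_DATE, 'MM/DD/YYYY' )    -- converts the date to a string in the specified format
TRUNC( ORDER_DATE, 'MM' )              -- truncates the date to the first day of the month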

For more information, see the Informatica PowerCenter 10.1.1 HotFix 1 Advanced Workflow Guide.

Pushdown Optimization for Microsoft Azure SQL Data Warehouse


Effective in version 10.1.1 HotFix 1, when the connection type is ODBC, you can configure source-side or full
pushdown optimization to push the transformation logic to Microsoft Azure SQL Data Warehouse.

For more information, see the Informatica PowerCenter 10.1.1 HotFix 1 Advanced Workflow Guide.

PowerExchange Adapters
This section describes new PowerExchange adapter features in version 10.1.1 HotFix 1.

PowerExchange Adapters for PowerCenter®


This section describes new PowerCenter adapter features in version 10.1.1 HotFix 1.

PowerExchange for Amazon Redshift


This section describes new PowerExchange for Amazon Redshift features in version 10.1.1 HotFix 1:

• You can read data from or write data to the following regions:
- Asia Pacific (Mumbai)
- Canada (Central)
- US East (Ohio)
• PowerExchange for Amazon Redshift supports the asterisk pushdown operator (*) that can be pushed to
the Amazon Redshift database by using source-side, target-side, or full pushdown optimization.
• For client-side and server-side encryption, you can configure the customer master key ID generated by
AWS Key Management Service (AWS KMS) in the connection.
For more information, see the Informatica 10.1.1 HotFix 1 PowerExchange for Amazon Redshift User Guide for
PowerCenter.

PowerExchange for Amazon S3
This section describes new PowerExchange for Amazon S3 features in version 10.1.1 HotFix 1:

• You can read data from or write data to the following regions:
- Asia Pacific (Mumbai)
- Canada (Central)
- US East (Ohio)
• For client-side and server-side encryption, you can configure the customer master key ID generated by
AWS Key Management Service (AWS KMS) in the connection.
• When you write data to the Amazon S3 buckets, you can compress the data in GZIP format.
• You can override the Amazon S3 folder path when you run a mapping.
For more information, see the Informatica PowerExchange for Amazon S3 10.1.1 HotFix 1 User Guide for
PowerCenter.

PowerExchange for Microsoft Azure Blob Storage


Effective in version 10.1.1 HotFix 1, you can use the append blob type target session property to write data to
Microsoft Azure Blob Storage.

For more information, see the Informatica PowerExchange for Microsoft Azure Blob Storage 10.1.1 HotFix 1
User Guide.

PowerExchange for Microsoft Azure SQL Data Warehouse


Effective in version 10.1.1 HotFix 1, you can use the following target session properties with PowerExchange
for Microsoft Azure SQL Data Warehouse:

• Update as Update. The PowerCenter Integration Service updates all rows as updates.
• Update else Insert. The PowerCenter Integration Service updates existing rows and inserts other rows as
if marked for insert.
• Delete. The PowerCenter Integration Service deletes the specified records from Microsoft Azure SQL Data
Warehouse.
For more information, see the Informatica PowerExchange for Microsoft Azure SQL Data Warehouse 10.1.1
HotFix 1 User Guide for PowerCenter.

PowerExchange for Microsoft Dynamics CRM


Effective in version 10.1.1 HotFix 1, you can use the following target session properties with PowerExchange
for Microsoft Dynamics CRM:

• Add row reject reason. Select this option to include the reason for row rejection in the reject file.
• Alternate Key Name. Indicates whether the column is an alternate key for an entity. Specify the name of
the alternate key. You can use the alternate key in update and upsert operations.
For more information, see the Informatica PowerExchange for Microsoft Dynamics CRM 10.1.1 HotFix 1 User
Guide for PowerCenter.

PowerExchange for SAP NetWeaver


Effective in version 10.1.1 HotFix 1, PowerExchange for SAP NetWeaver supports the SSTRING data type
when you read data from SAP tables through ABAP. The SSTRING data type is represented as SSTR in
PowerCenter.

For more information, see the Informatica PowerExchange for SAP NetWeaver 10.1.1 HotFix 1 User Guide.

Changes (10.1.1 HotFix 1)
This section describes changes in version 10.1.1 HotFix 1.

Support Changes
Effective in version 10.1.1 HotFix 1, the following changes apply to Informatica support for third-party platforms
and systems:

Big Data Management Hadoop Distributions


The following table lists the supported Hadoop distribution versions and changes in 10.1.1 HotFix 1:

Distribution      Supported Versions        10.1.1 HotFix 1 Changes
Amazon EMR        5.4                       To enable support for Amazon EMR 5.4, apply EBF-9585 to Big Data Management 10.1.1 HotFix 1. Big Data Management version 10.1.1 Update 2 supports Amazon EMR 5.0.
Azure HDInsight   3.5                       Added support for version 3.5.
Cloudera CDH      5.8, 5.9, 5.10, 5.11      Added support for versions 5.10 and 5.11.
Hortonworks HDP   2.3, 2.4, 2.5, 2.6        Added support for version 2.6.
IBM BigInsights   4.2                       No change.
MapR              5.2.0 MEP binary v. 1.0   No change.

To see a list of the latest supported versions, see the Product Availability Matrix on the Informatica Customer
Portal: [Link]

Chapter 6

New Features, Changes, and Release Tasks (10.1.1 Update 2)

This chapter includes the following topics:

• New Products (10.1.1 Update 2)
• New Features (10.1.1 Update 2)
• Changes (10.1.1 Update 2)

New Products (10.1.1 Update 2)


This section describes new products in version 10.1.1 Update 2.

PowerExchange for MapR-DB


Effective in version 10.1.1 Update 2, you can use PowerExchange for MapR-DB to read data from and write
data to MapR-DB binary tables.

PowerExchange for MapR-DB uses the HBase API to connect to MapR-DB. To connect to a MapR-DB table,
you must create an HBase connection and specify MapR-DB as the database type. You must then create an
HBase data object read or write operation and add it to a mapping to read or write data.

You can validate and run mappings in the native environment or on the Blaze engine in the Hadoop
environment.

For more information, see the Informatica PowerExchange for MapR-DB 10.1.1 Update 2 User Guide.

New Features (10.1.1 Update 2)


This section describes new features in version 10.1.1 Update 2.

Big Data Management
This section describes new big data features in version 10.1.1 Update 2.

Truncate Hive table partitions on mappings that use the Blaze run-time engine

Effective in version 10.1.1 Update 2, you can truncate Hive table partitions on mappings that use the
Blaze run-time engine.

For more information about truncating partitions in a Hive target, see the Informatica 10.1.1 Update 2 Big
Data Management User Guide.

Filters for partitioned columns on the Blaze engine

Effective in version 10.1.1 Update 2, the Blaze engine can push filters on partitioned columns down to
the Hive source to increase performance.

When a mapping contains a Filter transformation on a partitioned column of a Hive source, the Blaze
engine reads only the partitions with data that satisfies the filter condition. To enable the Blaze engine to
read specific partitions, the Filter transformation must be the next transformation after the source in the
mapping.

For more information, see the Informatica 10.1.1 Update 2 Big Data Management User Guide.

OraOop support on the Spark engine

Effective in version 10.1.1 Update 2, you can configure OraOop to run Sqoop mappings on the Spark
engine. When you read data from or write data to Oracle, you can configure the direct argument to
enable Sqoop to use OraOop.

OraOop is a specialized Sqoop plug-in for Oracle that uses native protocols to connect to the Oracle
database. When you configure OraOop, performance improves.

For more information, see the Informatica 10.1.1 Update 2 Big Data Management User Guide.
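As a hedged illustration, enabling OraOop amounts to adding the standard Sqoop direct argument; where exactly you enter the argument for these mappings is described in the User Guide, so treat the placement shown here as an assumption.

    # Illustrative Sqoop argument that enables OraOop for Oracle reads and writes:
    --direct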

Sqoop support for native Teradata mappings on Cloudera clusters

Effective in version 10.1.1 Update 2, if you use a Teradata PT connection to run a mapping on a Cloudera
cluster and on the Blaze engine, the Data Integration Service invokes the Cloudera Connector Powered
by Teradata at run time. The Data Integration Service then runs the mapping through Sqoop.

For more information, see the Informatica 10.1.1 Update 2 PowerExchange for Teradata Parallel
Transporter API User Guide.

Scheduler support on Blaze and Spark engines

Effective in version 10.1.1 Update 2, the following schedulers are valid for Hadoop distributions on both
Blaze and Spark engines:

• Fair Scheduler. Assigns resources to jobs such that all jobs receive, on average, an equal share of
resources over time.
• Capacity Scheduler. Designed to run Hadoop applications as a shared, multi-tenant cluster. You can
configure Capacity Scheduler with or without node labeling. Node label is a way to group nodes with
similar characteristics.

For more information, see the Mappings in the Hadoop Environment chapter of the Informatica 10.1.1
Update 2 Big Data Management User Guide.

Support for YARN queues on Blaze and Spark engines

Effective in version 10.1.1 Update 2, you can direct Blaze and Spark jobs to a specific YARN scheduler
queue. Queues allow multiple tenants to share the cluster. As you submit applications to YARN, the
scheduler assigns them to a queue. You configure the YARN queue in the Hadoop connection properties.

For more information, see the Mappings in the Hadoop Environment chapter of the Informatica 10.1.1
Update 2 Big Data Management User Guide.
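For the Spark engine, the standard Spark property that names the target queue is spark.yarn.queue; the sketch below assumes that the Hadoop connection lets you pass this property through its Spark configuration, so treat the placement as an assumption and the queue name as illustrative.

    # Illustrative Spark run-time property (spark.yarn.queue is standard Spark; placement assumed):
    spark.yarn.queue=etl_queue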

Hadoop security features on IBM BigInsights 4.2

Effective in version 10.1.1 Update 2, you can use the following Hadoop security features on the IBM
BigInsights 4.2 Hadoop distribution:

• Apache Knox
• Apache Ranger
• HDFS Transparent Encryption

For more information, see the Informatica 10.1.1 Update 2 Big Data Management Security Guide.

SSL/TLS security modes

Effective in version 10.1.1 Update 2, you can use the SSL and TLS security modes on the Cloudera and
Hortonworks Hadoop distributions, including the following security methods and plugins:

• Kerberos authentication
• Apache Ranger
• Apache Sentry
• Name node high availability
• Resource Manager high availability
For more information, see the Informatica 10.1.1 Update 2 Big Data Management Installation and
Configuration Guide.

Hive sources and targets on Amazon S3

Effective in version 10.1.1 Update 2, Big Data Management supports reading and writing to Hive on
Amazon S3 buckets for clusters configured with the following Hadoop distributions:

• Amazon EMR
• Cloudera
• Hortonworks
• MapR
• BigInsights

For more information, see the Informatica 10.1.1 Update 2 Big Data Management User Guide.

Enterprise Information Catalog


This section describes new features in Enterprise Information Catalog version 10.1.1 Update 2.

File System resource

Effective in version 10.1.1 Update 2, you can create a File System resource to import metadata from files
in Windows and Linux file systems.

For more information, see the Informatica 10.1.1 Update 2 Live Data Map Administrator Guide.

Apache Ranger-enabled clusters

Effective in version 10.1.1 Update 2, you can deploy Enterprise Information Catalog on Apache Ranger-
enabled clusters. Apache Ranger provides a security framework to manage the security of the clusters.

Enhanced SSH support for deploying Informatica Cluster Service

Effective in version 10.1.1 Update 2, you can deploy Informatica Cluster Service on hosts where Centrify
is enabled. Centrify integrates with an existing Active Directory infrastructure to manage user
authentication on remote Linux hosts.

Intelligent Data Lake


This section describes new Intelligent Data Lake features in version 10.1.1 Update 2.

Hadoop ecosystem

Effective in version 10.1.1 Update 2, you can use the following Hadoop distributions as a Hadoop data lake:

• Cloudera CDH 5.9
• Hortonworks HDP 2.3, 2.4, and 2.5
• Azure HDInsight 3.5
• Amazon EMR 5.0
• IBM BigInsights 4.2

Using MariaDB for the Data Preparation Service

Effective in version 10.1.1 Update 2, you can use MariaDB 10.0.28 for the Data Preparation Service
repository.

Viewing column-level lineage

Effective in version 10.1.1 Update 2, data analysts can view the lineage of individual columns in a table for
activities such as data asset copy, import, export, publication, and upload.

SSL/TLS support

Effective in version 10.1.1 Update 2, you can integrate Intelligent Data Lake with Cloudera 5.9 clusters
that are SSL/TLS enabled.

PowerExchange Adapters for Informatica


This section describes new Informatica adapter features in version 10.1.1 Update 2.

PowerExchange for Amazon Redshift


Effective in version 10.1.1 Update 2, you can select multiple schemas for Amazon Redshift objects.

For more information, see the Informatica 10.1.1 Update 2 PowerExchange for Amazon Redshift User Guide.

Changes (10.1.1 Update 2)


This section describes changes in version 10.1.1 Update 2.

Support Changes
This section describes the support changes in version 10.1.1 Update 2.

Distribution support changes for Big Data Management

The following table lists the supported Hadoop distribution versions and changes in 10.1.1 Update 2:

Distribution      Supported Versions   10.1.1 Update 2 Changes
Amazon EMR        5.0.0                No change.
Azure HDInsight   3.5 *                Added support for version 3.5. Dropped support for version 3.4.
Cloudera CDH      5.8, 5.9, 5.10 *     Added support for version 5.10.
Hortonworks HDP   2.3, 2.4, 2.5        Added support for versions 2.3 and 2.4.
IBM BigInsights   4.2                  No change.
MapR              5.2                  Reinstated support. Added support for version 5.2. Dropped support for version 5.1.

*Azure HDInsight 3.5 and Cloudera CDH 5.10 are available for technical preview. Technical preview functionality is
supported but is not production-ready. Informatica recommends that you use it in non-production environments only.

For a complete list of Hadoop support, see the Product Availability Matrix on Informatica Network:
[Link]

Dropped support for Teradata Connector for Hadoop (TDCH) and Teradata PT objects on the Blaze engine

Effective in version 10.1.1 Update 2, Informatica dropped support for Teradata Connector for Hadoop
(TDCH) on the Blaze engine. The configuration for Sqoop connectivity in 10.1.1 Update 2 depends on the
Hadoop distribution:

IBM BigInsights and MapR

You can configure Sqoop connectivity through the JDBC connection. For information about
configuring Sqoop connectivity through JDBC connections, see the Informatica 10.1.1 Update 2 Big
Data Management User Guide.

Cloudera CDH

You can configure Sqoop connectivity through the Teradata PT connection and the Cloudera
Connector Powered by Teradata.

1. Download the Cloudera Connector Powered by Teradata .jar files and copy them to the node
where the Data Integration Service runs. For more information, see the Informatica 10.1.1
Update 2 PowerExchange for Teradata Parallel Transporter API User Guide.
2. Move the configuration parameters that you defined in the [Link] file to the
Additional Sqoop Arguments field in the Teradata PT connection. See the Cloudera Connector
Powered by Teradata documentation for a list of arguments that you can specify.

Hortonworks HDP

You can configure Sqoop connectivity through the Teradata PT connection and the Hortonworks
Connector for Teradata.

1. Download the Hortonworks Connector for Teradata .jar files and copy them to the node where
the Data Integration Service runs. For more information, see the Informatica 10.1.1 Update 2
PowerExchange for Teradata Parallel Transporter API User Guide.
2. Move the configuration parameters that you defined in the [Link] file to the
Additional Sqoop Arguments field in the Teradata PT connection. See the Hortonworks
Connector for Teradata documentation for a list of arguments that you can specify.

Note: You can continue to use TDCH on the Hive engine through Teradata PT connections.
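The following hedged sketch illustrates the two migration steps for a Cloudera cluster. The .jar file name, the destination directory, and the argument values are all assumptions; use the exact file names and locations from the PowerExchange for Teradata Parallel Transporter API User Guide.

    # Step 1 (illustrative): copy the Cloudera Connector Powered by Teradata .jar files
    # to the node where the Data Integration Service runs. Paths are assumptions.
    scp sqoop-connector-teradata-*.jar dis-host:/opt/informatica/externaljdbcjars/

    # Step 2 (illustrative): re-create your former TDCH settings as standard Sqoop arguments
    # in the Additional Sqoop Arguments field of the Teradata PT connection, for example:
    --num-mappers 8 --batch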

Deprecated support of Sqoop connectivity through Teradata PT data objects and Teradata PT connections

Effective in version 10.1.1 Update 2, Informatica deprecated Sqoop connectivity through Teradata PT
data objects and Teradata PT connections for Cloudera CDH and Hortonworks. Support will be dropped
in a future release.

To read data from or write data to Teradata by using TDCH and Sqoop, Informatica recommends that
you configure Sqoop connectivity through JDBC connections and relational data objects.

Big Data Management


This section describes the changes to big data in version 10.1.1 Update 2.

Sqoop
Effective in version 10.1.1 Update 2, you can no longer override the user name and password in a Sqoop
mapping by using the --username and --password arguments. Sqoop uses the values that you configure in the
User Name and Password fields of the JDBC connection.

For more information, see the Informatica 10.1.1 Update 2 Big Data Management User Guide.
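In practical terms, credential overrides supplied as Sqoop arguments are now ignored; a hedged before-and-after sketch with illustrative values follows.

    # No longer honored when supplied as Sqoop arguments:
    --username sqoop_user --password secret

    # Instead, set the credentials in the User Name and Password fields of the
    # JDBC connection; Sqoop uses those values at run time.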

Enterprise Information Catalog


This section describes the changes to the Enterprise Information Catalog in version 10.1.1 Update 2.

Asset path

Effective in version 10.1.1 Update 2, you can view the path to the asset in the Asset Details view along
with other general information about the asset.

For more information, see the Informatica 10.1.1 Update 2 Enterprise Information Catalog User Guide.

Business terms in the Profile Results section

Effective in version 10.1.1 Update 2, the profile results section for tabular assets also includes business
terms. Previously, the profile results section included column names, data types, and data domains.

For more information, see the Informatica 10.1.1 Update 2 Enterprise Information Catalog User Guide.

URLs as attribute values

Effective in version 10.1.1 Update 2, if you had configured a custom attribute to allow you to enter URLs
as the attribute value, you can assign multiple URLs as attribute values to a technical asset.

For more information, see the Informatica 10.1.1 Update 2 Enterprise Information Catalog User Guide.

Detection of CSV file headers

Effective in version 10.1.1 Update 2, you can configure the following resources to automatically detect
headers for CSV files from which you extract metadata:

• Amazon S3
• HDFS
• File System

For more information, see the Informatica 10.1.1 Update 2 Live Data Map Administrator Guide.

Amazon Redshift resource

Effective in version 10.1.1 Update 2, you can import multiple schemas for an Amazon Redshift resource.

For more information, see the Informatica 10.1.1 Update 2 Live Data Map Administrator Guide.

Profiling for Hive resource on Data Integration Service

Effective in version 10.1.1 Update 2, you can run Hive resources on Data Integration Service for profiling.

For more information, see the Informatica 10.1.1 Update 2 Live Data Map Administrator Guide.

PowerExchange Adapters for Informatica


This section describes changes to Informatica adapters in version 10.1.1 Update 2.

PowerExchange for Amazon Redshift


Effective in version 10.1.1 Update 2, you can select multiple schemas for Amazon Redshift objects. To select
multiple schemas, leave the Schema field blank in the connection properties. In earlier releases, selecting
schema was mandatory and you could select only one schema.

If you upgrade to version 10.1.1 Update 2, the PowerExchange for Amazon Redshift mappings created in
earlier versions must have the relevant schema name in the connection property. Otherwise, the mappings
fail when you run them on version 10.1.1 Update 2.

For more information, see the Informatica 10.1.1 Update 2 PowerExchange for Amazon Redshift User Guide.

Chapter 7

New Features, Changes, and Release Tasks (10.1.1 Update 1)

This chapter includes the following topics:

• New Features (10.1.1 Update 1)
• Changes (10.1.1 Update 1)
• Release Tasks (10.1.1 Update 1)

New Features (10.1.1 Update 1)


This section describes new features in version 10.1.1 Update 1.

Big Data Management


This section describes new big data features in version 10.1.1 Update 1.

Sqoop Support for Native Teradata Mappings


Effective in version 10.1.1 Update 1, if you use a Teradata PT connection to run a mapping on a Hortonworks
cluster and on the Blaze engine, the Data Integration Service invokes the Hortonworks Connector for
Teradata at run time. The Data Integration Service then runs the mapping through Sqoop.

For more information, see the Informatica 10.1.1 Update 1 PowerExchange for Teradata Parallel Transporter
API User Guide.

SQL Override Support for Native Teradata Mappings


Effective in version 10.1.1 Update 1, if you use a Teradata PT connection to run a mapping on a Hortonworks
cluster and on the Blaze engine, you can configure an SQL override query. You can also parameterize the SQL
override query.

For more information, see the Informatica 10.1.1 Update 1 PowerExchange for Teradata Parallel Transporter
API User Guide.

Changes (10.1.1 Update 1)


This section describes changes in version 10.1.1 Update 1.

PowerExchange Adapters for Informatica
This section describes PowerExchange adapter changes in version 10.1.1 Update 1.

PowerExchange for Amazon S3


Effective in version 10.1.1 Update 1, PowerExchange for Amazon S3 has the following advanced properties
for an Amazon S3 data object read and write operation:

• Folder Path
• Download S3 File in Multiple Parts
• Staging Directory
Previously, the advanced properties for an Amazon S3 data object read and write operation were:

• S3 Folder Path
• Enable Download S3 Files in Multiple Parts
• Local Temp Folder Path

For more information, see the Informatica 10.1.1 Update 1 PowerExchange for Amazon S3 User Guide.

Release Tasks (10.1.1 Update 1)


This section describes the release tasks for version 10.1.1 Update 1.

PowerExchange Adapters for Informatica


This section describes PowerExchange adapter release tasks for version 10.1.1 Update 1.

PowerExchange for Teradata Parallel Transporter API


Effective in version 10.1.1 Update 1, if you use a Teradata PT connection to run a mapping on a Hortonworks
cluster and on the Blaze engine, the Data Integration Service invokes the Hortonworks Connector for
Teradata at run time. The Data Integration Service then runs the mapping through Sqoop.

If you had configured Teradata Connector for Hadoop (TDCH) to run Teradata mappings on the Blaze engine
and installed 10.1.1 Update 1, the Data Integration Service ignores the TDCH configuration. You must
perform the following upgrade tasks to run Teradata mappings on the Blaze engine:

1. Install 10.1.1 Update 1.


2. Download the Hortonworks Connector for Teradata JAR files.
3. Move the configuration parameters that you defined in the [Link] file to the Additional
Sqoop Arguments field in the Teradata PT connection. See the Hortonworks Connector for Teradata
documentation for a list of arguments that you can specify.

Note: If you had configured TDCH to run Teradata mappings on the Blaze engine and on a distribution other
than Hortonworks, do not install 10.1.1 Update 1. You can continue to use version 10.1.1 to run mappings
with TDCH on the Blaze engine and on a distribution other than Hortonworks.

For more information, see the Informatica 10.1.1 Update 1 PowerExchange for Teradata Parallel Transporter
API User Guide.

Chapter 8

New Products (10.1.1)


This chapter includes the following topics:

• Intelligent Streaming
• PowerExchange Adapters

Intelligent Streaming
With the advent of big data technologies, organizations are looking to derive maximum benefit from the
velocity of data, capturing it as it becomes available, processing it, and responding to events in real time. By
adding real-time streaming capabilities, organizations can leverage the lower latency to create a complete,
up-to-date view of customers, deliver real-time operational intelligence to customers, improve fraud
detection, reduce security risk, improve physical asset management, improve total customer experience, and
generally improve their decision-making processes by orders of magnitude.

In 10.1.1, Informatica introduces Intelligent Streaming, a new product to help IT derive maximum value from
real-time queues by streaming data, processing it, and extracting meaningful business value in near real time.
Customers can process diverse data types from non-traditional sources, such as website log file data,
sensor data, message bus data, and machine data, in flight and with high degrees of accuracy.

Intelligent Streaming is built as a capability extension of Informatica's Intelligent Data Platform and provides
the following benefits for IT:

• Create and run streaming (continuous-processing) mappings.


• Collect events from real-time queues such as Apache Kafka and JMS.
• Transform the data, create business rules for the transformed data, detect real-time patterns, and drive
automated responses or alerts.
• Provide management and monitoring capabilities of streams at runtime.
• Provide at-least-once delivery guarantees.
• Provide granular lifecycle controls based on the number of rows processed or the execution time.
• Reuse and maintain event processing logic, including batch mappings (after some modifications).

Intelligent Streaming has the following features:


Capture and Transport Stream Data

You can stream the following types of data from sources such as Kafka or JMS, in JSON, XML, or Avro
formats:

• Application and infrastructure log data

• Change data capture (CDC) from relational databases
• Clickstreams from web servers
• Social media event streams
• Time-series data from IoT devices
• Message bus data
• Programmable logic controller (PLC) data
• Point of sale data from devices

In addition, Informatica customers can leverage Informatica's Vibe Data Stream (licensed separately) to
collect and ingest data in real time, for example, data from sensors, and machine logs, to a Kafka queue.
Intelligent Streaming can then process this data.

Refine, Enrich, Analyze, and Process Stream Data

Use the underlying processing platform to run the following complex data transformations in real time
without coding or scripting:

• Window Transformation for Streaming use cases with the option of sliding and tumbling windows.
• Filter, Expression, Union, Router, Aggregate, Joiner, Lookup, Java, and Sorter transformations can
now be used with Streaming mappings and are executed on Spark Streaming.
• Lookup transformations can be used with Flat file, HDFS, Sqoop, and Hive.

Publish Data

You can stream data to different types of targets, such as Kafka, HDFS, NoSQL databases, and
enterprise messaging systems.

Intelligent Streaming is built on the Informatica Big Data Platform and extends the platform to provide
streaming capabilities. Intelligent Streaming uses Spark Streaming to process streamed data. It uses
YARN to manage the resources on a Spark cluster more efficiently and uses third-party distributions to
connect to and push job processing to a Hadoop environment.

Use Informatica Developer (the Developer tool) to create streaming mappings. Use the Hadoop run-time
environment and the Spark engine to run the mapping. You can configure high availability to run the
streaming mappings on the Hadoop cluster.

For more information about Intelligent Streaming, see the Informatica Intelligent Streaming User Guide.

PowerExchange Adapters

PowerExchange Adapters for Informatica


This section describes new Informatica adapters in version 10.1.1.

PowerExchange for Amazon S3


Effective in version 10.1.1, you can create an Amazon S3 connection to specify the location of Amazon S3
sources and targets you want to include in a data object. You can use the Amazon S3 connection in data
object read and write operations. You can validate and run mappings in the native environment or on the
Blaze engine in the Hadoop environment.

For more information, see the Informatica PowerExchange for Amazon S3 10.1.1 User Guide.

Chapter 9

New Features (10.1.1)


This chapter includes the following topics:

• Application Services
• Big Data
• Business Glossary
• Command Line Programs
• Enterprise Information Catalog
• Informatica Analyst
• Informatica Installation
• Intelligent Data Lake
• Mappings
• Metadata Manager
• PowerExchange Adapters
• Security
• Transformations
• Web Services
• Workflows

Application Services
This section describes new application service features in version 10.1.1.

Analyst Service
Effective in version 10.1.1, you can configure an Analyst Service to store all audit data for exception
management tasks in a single database. The database stores a record of the work that users perform on
Human task instances in the Analyst tool that the Analyst Service specifies.

Set the database connection and the schema for the audit tables on the Human task properties of the Analyst
Service in the Administrator tool. After you specify a connection and schema, use the Actions menu options
in the Administrator tool to create the audit database contents. Or, use the infacmd as commands to set the
database and schema and to create the audit database contents. To set the database and the schema, run
infacmd as updateServiceOptions. To create the database contents, run infacmd as
createExceptionAuditTables.

If you do not specify a connection and schema, the Analyst Service creates audit tables for each task
instance in the database that stores the task instance data.

For more information, see the Informatica 10.1.1 Application Service Guide and the Informatica 10.1.1
Command Reference.
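A hedged sketch of the command sequence follows. The -dn, -un, -pd, and -sn options are the standard infacmd domain and service options; the -o option names for the audit database and schema are shown as placeholders because the exact names are listed in the Command Reference.

    # Set the audit database connection and schema on the Analyst Service (sketch only):
    infacmd as updateServiceOptions -dn MyDomain -un Administrator -pd MyPassword \
        -sn MyAnalystService -o <audit database option>=ExceptionAuditDB <audit schema option>=audit_schema

    # Create the audit database contents:
    infacmd as createExceptionAuditTables -dn MyDomain -un Administrator -pd MyPassword -sn MyAnalystService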

Big Data
This section describes new big data features in version 10.1.1.

Blaze Engine
Effective in version 10.1.1, the Blaze engine has the following new features:

Hive Sources and Targets on the Blaze Engine


Effective in version 10.1.1, Hive sources and targets have the following additional support on the Blaze
engine:

• Hive decimal data type values with precision 38


• Quoted identifiers in Hive table names, column names, and schema names
• Partitioned Hive tables as targets
• Bucketed Hive tables as sources and targets
• SQL overrides for Hive sources
• Table locking for Hive sources and targets
• Create or replace target tables for Hive targets
• Truncate target table for Hive targets and Hive partitioned tables
For more information, see the "Mapping Objects in the Hadoop Environment" chapter in the Informatica Big
Data Management® 10.1.1 User Guide.

Transformation Support on the Blaze Engine


Effective in version 10.1.1, transformations have the following additional support on the Blaze engine:

• Lookup transformation. You can use SQL overrides and filter queries with Hive lookup sources.
• Sorter transformation. Global sorts are supported when the Sorter transformation is connected to a flat
file target. To maintain global sort order, you must enable the Maintain Row Order property in the flat file
target. If the Sorter transformation is midstream in the mapping, then rows are sorted locally.
• Update Strategy transformation. The Update Strategy transformation is supported with some restrictions.

For more information, see the "Mapping Objects in the Hadoop Environment" chapter in the Informatica Big
Data Management 10.1.1 User Guide.

Blaze Engine Monitoring


Effective in version 10.1.1, more detailed statistics about mapping jobs are available in the Blaze Summary
Report. In the Blaze Job Monitor, a green summary report button appears beside the name of each successful
grid task and opens the Blaze Summary Report.

The Blaze Summary Report contains the following information about a mapping job:

• Time taken by individual segments. A pie chart of segments within the grid task.

• Mapping properties. A table containing basic information about the mapping job.
• Tasklet execution time. A time series graph of all tasklets within the selected segment.
• Selected tasklet information. Source and target row counts and cache information for each individual
tasklet.

Note: The Blaze Summary Report is in beta. It contains most of the major features, but is not yet complete.

Blaze Engine Logs


Effective in version 10.1.1, the following error logging enhancements are available on the Blaze engine:

• Execution statistics are available in the LDTM log when the log tracing level is set to verbose initialization
or verbose data. The log includes the following mapping execution details:
- Start time, end time, and state of each task

- Blaze Job Monitor URL

- Number of total, succeeded, and failed/cancelled tasklets

- Number of processed and rejected rows for sources and targets

- Data errors, if any, for transformations in each executed segment


• The LDTM log includes the following transformation statistics:
- Number of output rows for sources and targets

- Number of error rows for sources and targets


• The session log also displays a list of all segments within the grid task with corresponding links to the
Blaze Job Monitor. Click on a link to see the execution details of that segment.
For more information, see the "Monitoring Mappings in a Hadoop Environment" chapter in the Informatica Big
Data Management 10.1.1 User Guide.

Installation and Configuration


This section describes new features related to big data installation and configuration.

Address Reference Data Installation


Effective in version 10.1.1, Informatica Big Data Management installs with a shell script that you can use to
install address reference data files. The script installs the reference data files on the compute nodes that you
specify.

When you run an address validation mapping in a Hadoop environment, the reference data files must reside
on each compute node on which the mapping runs. Use the script to install the reference data files on
multiple nodes in a single operation.

The shell script name is [Link].

Find the script in the following directory in the Informatica Big Data Management installation:

[Informatica installation directory]/tools/dq/av

When you run the script, you can enter the following information:

• The current location of the reference data files.


• The directory to which the script installs the files.
• The location of the file that contains the compute node names.
• The user name of the user who runs the script.

If you do not enter the information, the script uses a series of default values to identify the file locations and
the user name.

For more information, see the Informatica Big Data Management 10.1.1 Installation and Configuration Guide.

Hadoop Configuration Manager in Silent Mode


Effective in version 10.1.1, you can use the Hadoop Configuration Manager in silent mode to configure Big
Data Management.

For more information about configuring Big Data Management in silent mode, see the Informatica Big Data
Management 10.1.1 Installation and Configuration Guide.

Installation in an Ambari Stack


Effective in version 10.1.1, you can use the Ambari configuration manager to install Big Data Management as
a service in an Ambari stack.

For more information about installing Big Data Management in an Ambari stack, see the Informatica 10.1.1
Big Data Management Installation and Configuration Guide.

Script to Populate HDFS in HDInsight Clusters


Effective in version 10.1.1, you can use a script to populate the HDFS file system on an Azure HDInsight
cluster when you configure the cluster for Big Data Management.

For more information about using the script to populate the HDFS file system, see the Informatica Big Data
Management 10.1.1 Installation and Configuration Guide.

Spark Engine
Effective in version 10.1.1, the Spark engine has the following new features:

Binary Data Types


Effective in version 10.1.1, the Spark engine supports binary data type for the following functions:

• DEC_BASE64
• ENC_BASE64
• MD5
• UUID4
• UUID_UNPARSE
• CRC32
• COMPRESS
• DECOMPRESS (ignores precision)
• AES Encrypt
• AES Decrypt

Note: The Spark engine does not support binary data type for the join and lookup conditions.

For more information, see the "Function Reference" chapter in the Informatica Big Data Management 10.1.1
User Guide.

Transformation Support on the Spark Engine
Effective in version 10.1.1, transformations have the following additional support on the Spark engine:

• The Java transformation is supported with some restrictions.


• The Lookup transformation can access a Hive lookup source.
For more information, see the "Mapping Objects in the Hadoop Environment" chapter in the Informatica Big
Data Management 10.1.1 User Guide.

Run-time Statistics for Spark Engine Job Runs


Effective in version 10.1.1, you can view summary and detailed statistics for mapping jobs run on the Spark
engine.

You can view the following Spark summary statistics in the Summary Statistics view:

• Source. The name of the mapping source file.


• Target. The name of the target file.
• Rows. The number of rows read for source and target.
The Detailed Statistics view displays a graph of the row counts for Spark engine job runs.

For more information, see the "Mapping Objects in the Hadoop Environment" chapter in the Informatica Big
Data Management 10.1.1 User Guide.

Security
This section describes new big data security features in version 10.1.1.

Fine-Grained SQL Authorization Support for Hive Sources


Effective in version 10.1.1, you can configure a Hive connection to observe fine-grained SQL authorization
when a Hive source table uses this level of authorization. Enable the Observe Fine Grained SQL Authorization
option in the Hive connection to observe row and column-level restrictions that are configured for Hive tables
and views.

For more information, see the Authorization section in the "Introduction to Big Data Management Security"
chapter of the Informatica 10.1.1 Big Data Management Security Guide.

Spark Engine Security Support


Effective in version 10.1.1, the Spark engine supports the following additional security systems:

• Apache Sentry on Cloudera CDH clusters


• Apache Ranger on Hortonworks HDP clusters
• HDFS Transparent Encryption on Hadoop distributions that the Spark engine supports
• Operating system profiles on Hadoop distributions that the Spark engine supports
For more information, see the "Introduction to Big Data Management Security" chapter in the Informatica Big
Data Management 10.1.1 Security Guide.

Sqoop
Effective in version 10.1.1, you can use the following new features when you configure Sqoop:

• You can run Sqoop mappings on the Blaze engine.


• You can run Sqoop mappings on the Spark engine to read data from or write data to Oracle databases.
• When you run Sqoop mappings on the Blaze and Spark engines, you can configure partitioning. You can
also run the mappings on a Hadoop cluster that uses Kerberos authentication.
• When you run Sqoop mappings on the Blaze engine to read data from or write data to Teradata, you can
use the following specialized connectors:
- Cloudera Connector Powered by Teradata

- Hortonworks Connector for Teradata

These specialized connectors use native protocols to connect to the Teradata database.
For more information, see the Informatica 10.1.1 Big Data Management User Guide.
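As a hedged illustration of partitioned Sqoop reads, the standard Sqoop arguments below control the degree of parallelism and the split column; treating these arguments as the way partitioning is configured for these mappings is an assumption, and the values are illustrative.

    # Illustrative Sqoop arguments for a partitioned read:
    --num-mappers 8 --split-by order_id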

Business Glossary
This section describes new Business Glossary features in version 10.1.1.

Export Rich Text as Plain Text


Effective in version 10.1.1, you can export rich text glossary content as plain text. The export option is
available in the glossary export wizard and in the command line program.

For more information, see the "Glossary Administration " chapter in the Informatica 10.1.1 Business Glossary
Guide.

Include Rich Text Content for Conflicting Assets


Effective in version 10.1.1, you can choose to import properties that are formatted as rich text or are of long
string data type, from the import file, when the Analyst tool detects conflicting assets.

The import option is available in the glossary import wizard and in the command line program.

For more information, see the "Glossary Administration" chapter in the Informatica 10.1.1 Business Glossary
Guide.

Command Line Programs


This section describes new commands in version 10.1.1.

infacmd as Commands
The following table describes new infacmd as commands:

Command                      Description
CreateExceptionAuditTables   Creates the audit tables for the Human task instances that the Analyst Service specifies.
DeleteExceptionAuditTables   Deletes the audit tables for the Human task instances that the Analyst Service specifies.

The following table describes new options for infacmd as commands:

Command                Description
UpdateServiceOptions   - [Link]. Identifies the database to store the audit trail tables for exception management tasks.
                       - [Link]. Identifies the schema to store the audit trail tables for exception management tasks.

For more information, see the "Infacmd as Command Reference" chapter in the Informatica 10.1.1 Command
Reference.

infacmd dis Command


The following table describes a new infacmd dis command:

Command                                  Description
replaceMappingHadoopRuntimeConnections   Replaces the Hadoop connection of all mappings in deployed applications with another Hadoop connection. The Data Integration Service uses the Hadoop connection to connect to the Hadoop cluster to run mappings in the Hadoop environment.

For more information, see the "infacmd dis Command Reference" chapter in the Informatica 10.1.1 Command
Reference.
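A hedged invocation sketch follows; -dn, -un, -pd, and -sn are the standard infacmd options, and the connection-related options are left as a placeholder because the exact syntax is in the Command Reference.

    # Swap the Hadoop connection across all deployed mappings (sketch only):
    infacmd dis replaceMappingHadoopRuntimeConnections -dn MyDomain -un Administrator -pd MyPassword \
        -sn MyDataIntegrationService <connection options: see Command Reference>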

infacmd mrs Command


The following table describes a new infacmd mrs command:

Command                                  Description
replaceMappingHadoopRuntimeConnections   Replaces the Hadoop connection of all mappings in the repository with another Hadoop connection. The Data Integration Service uses the Hadoop connection to connect to the Hadoop cluster to run mappings in the Hadoop environment.

For more information, see the "infacmd mrs Command Reference" chapter in the Informatica 10.1.1
Command Reference.

pmrep Commands
The following table describes an updated option for a pmrep command:

Command    Description
Validate   Contains the following updated option:
           -n (object_name). Required. Name of the object to validate. Do not use this option if you use the -i argument.
           When you validate a non-reusable session, include the workflow name. Enter the workflow name and the session name in the following format:
           <workflow name>.<session instance name>
           When you validate a non-reusable session in a non-reusable worklet, enter the workflow name, worklet name, and session name in the following format:
           <workflow name>.<worklet name>.<session instance name>

For more information, see the "pmrep Command Reference" chapter in the Informatica 10.1.1 Command
Reference.
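A hedged example follows; the -o (object type) option is standard pmrep usage and is stated here as an assumption, and the workflow and session names are illustrative. Connect to the repository with pmrep connect first.

    # Validate a non-reusable session by qualifying it with its workflow name:
    pmrep validate -o session -n wf_LoadOrders.s_m_LoadOrders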

Enterprise Information Catalog


This section describes new features in Enterprise Information Catalog version 10.1.1.

Business Glossary Integration


Effective in version 10.1.1, Analyst tool business glossaries are fully integrated with Enterprise Information
Catalog.

You can perform the following tasks with business glossary assets:

View business glossary assets in the catalog.

You can search for and view the full details for a business term, category, or policy in Enterprise
Information Catalog. When you view the details for a business term, Enterprise Information Catalog also
displays the glossary assets, technical assets, and other assets, such as Metadata Manager objects, that
the term is related to.

When you view a business glossary asset in the catalog, you can open the asset in the Analyst tool
business glossary for further analysis.

Associate an asset with a business term.

You can associate a business term with a technical asset to make an asset easier to understand and
identify in the catalog. For example, you associate business term "Movie Details" with a relational table
named "mv_dt." Enterprise Information Catalog displays the term "Movie Details" next to the asset name
in the search results, in the Asset Details view, and optionally, in the lineage and impact diagram.

When you associate a term with an asset, Enterprise Information Catalog provides intelligent
recommendations for the association based on data domain discovery.

For more information about business glossary assets, see the "View Assets" chapter in the Informatica 10.1.1
Enterprise Information Catalog User Guide.

Column Similarity Profiling
Effective in version 10.1.1, you can configure and perform column similarity profiling. Column similarity
profiling prepares the metadata extracted from data sources so that you can discover similar columns in your
enterprise data. You can then attach data domains to similar columns for faster and more efficient searches
for similar data in Enterprise Information Catalog.

Enterprise Information Catalog supports column similarity profiling for the following resource scanners:

• Amazon Redshift
• Amazon S3
• Salesforce
• HDFS
• Hive
• IBM DB2
• IBM DB2 for z/OS
• IBM Netezza
• JDBC
• Microsoft SQL Server
• Oracle
• Sybase
• Teradata
• SAP

Data Domains and Data Domain Groups


Effective in version 10.1.1, you can create data domains and data domain groups in Enterprise Information
Catalog. You can group logical data domains in a data domain group.

A data domain is a predefined or user-defined Model repository object based on the semantics of column
data or a column name. Examples include Social Security number, phone number, and credit card number.

You can create data domains based on data rules or column name rules defined in the Informatica Analyst
Tool or the Informatica Developer Tool. Alternatively, you can create data domains based on existing
columns in the catalog. You can define proximity rules to configure inference for new data domains from
existing data domains configured in the catalog.

Lineage and Impact Analysis


Effective in version 10.1.1, lineage and impact diagrams have expanded functionality. The Lineage and
Impact view also contains a tabular impact summary that lists the assets that impact and are impacted by
the asset that you are studying.

The Lineage and Impact view has the following enhancements:

Diagram enhancements

The lineage and impact diagram has the following enhancements:

• By default, the lineage and impact diagram displays the origins, the asset that you are studying, and
the destinations for the data. You can use the slider controls to reveal intermediate assets one at a
time by distance from the seed asset or to fully expand the diagram. You can also expand all assets
within a particular data flow path.
• You can display the child assets of the asset that you are studying, all the way down to the column or
field level. When you drill down on an asset, the diagram displays the child assets that you select and
the assets to which the child assets are linked.
• You can display the business terms that are associated with the technical assets in the diagram.
• You can print the diagram and export it to a scalable vector graphics (.svg) file.

Impact analysis

When you open the Lineage and Impact view for an asset, you can switch from the diagram view to the
tabular asset summary. The tabular asset summary lists all of the assets that impact and are impacted
by the asset that you are studying. You can export the asset summary to a Microsoft Excel file to create
reports or further analyze the data.

For more information about lineage and impact analysis, see the "View Lineage and Impact" chapter in the
Informatica 10.1.1 Enterprise Information Catalog User Guide.

Permissions for Users and User Groups


Effective in version 10.1.1, you can configure permissions for users and user groups on resources configured
in Enterprise Information Catalog. You can grant permissions to view the resource metadata or to view and
enrich the resource metadata. You can also deny specific users and user groups permission to view or enrich
resource metadata.

New Resource Types


Effective in version 10.1.1, you can create resources for the following data source types:

Oracle Business Intelligence

Extract metadata from the Oracle business intelligence tool that provides analysis and reporting
capabilities.

Informatica Master Data Management

Extract metadata about critical information within an organization from Informatica Master Data
Management.

Microsoft SQL Server Integration Service

Extract metadata about data integration and workflow applications from Microsoft SQL Server
Integration Service.

SAP

Extract metadata from the SAP application platform that integrates multiple business applications and
solutions.

Hive on Amazon Elastic MapReduce

Extract metadata from files in Amazon Elastic MapReduce using a Hive resource.

Hive on Azure HDInsight

Extract metadata from files in Azure HDInsight using a Hive resource.

Synonym Definition Files


Effective in version 10.1.1, you can upload synonym definition files to Enterprise Information Catalog.
Synonym definition files include synonyms defined for table names, column names, data domains, and other
assets in the catalog. You can search for the assets in Enterprise Information Catalog by using the defined
synonyms.

Universal Connectivity Framework


Effective in version 10.1.1, Enterprise Information Catalog introduces the Universal Connectivity Framework.
Using the framework, you can build custom resources to extract metadata from a range of data sources
supported by MITI.

Informatica Analyst
This section describes new Analyst tool features in version 10.1.1.

Profiles
This section describes new Analyst tool features for profiles and scorecards.

Drilldown on Scorecards
Effective in version 10.1.1, when you click a data series or data point in the scorecard dashboard, the
scorecards that map to the data series or data point appear in the assets list pane.

For more information about scorecards, see the "Scorecards in Informatica Analyst" chapter in the
Informatica 10.1.1 Data Discovery Guide.

Informatica Installation
This section describes new installation features in version 10.1.1.

Informatica Upgrade Advisor


Effective in version 10.1.1, you can run the Informatica Upgrade Advisor to check for conflicts and
deprecated services in the domain before you perform an upgrade.

For more information about the upgrade advisor, see the Informatica Upgrade Guides.

Intelligent Data Lake
This section describes new Intelligent Data Lake features in version 10.1.1.

Data Preview for Tables in External Sources


Effective in version 10.1.1, you can preview sample data for external tables (outside the Hadoop data lake) if
these sources are cataloged. The administrator needs to configure JDBC connections with Sqoop and
provide the analysts with the requisite permissions. Analysts can connect to the data source through these
connections to view the data from assets that are not in the data lake.

For more information, see the "Discover Data" chapter in the 10.1.1 Intelligent Data Lake User Guide.

Importing Data From Tables in External Sources


Effective in version 10.1.1, you can import data from tables in external sources (outside the Hadoop data
lake), such as Oracle and Teradata, into the data lake if these sources are already cataloged. The
administrator needs to configure JDBC connections with Sqoop to the external sources and provide access
to the analyst. The analyst can use these connections to preview the data asset and import it into the lake
as needed.

For more information, see the "Discover Data" chapter in the 10.1.1 Intelligent Data Lake User Guide.

Exporting Data to External Targets


Effective in version 10.1.1, you can export a data asset or a publication to external targets (outside the
Hadoop data lake), such as Oracle and Teradata. The administrator needs to configure the JDBC connections
with Sqoop to the external targets and provide access to the analyst. The analyst can use these connections
to export the data asset to the external database.

For more information, see the "Discover Data" chapter in the 10.1.1 Intelligent Data Lake User Guide.

Configuring Sampling Criteria for Data Preparation


Effective in version 10.1.1, you can specify the sampling criteria that best suit your needs for data
preparation for a given data asset. You can choose to include only a few columns during preparation and
filter the data, choose the number of rows to sample, and select Random or First N rows as the sample.

For more information, see the "Prepare Data" chapter in the 10.1.1 Intelligent Data Lake User Guide.

Performing a Lookup on Worksheets


Effective in version 10.1.1, you can perform a lookup on worksheets. Use the lookup function to look up a key
column in another worksheet and fetch values from the corresponding columns in that worksheet.

For more information, see the "Prepare Data" chapter in the 10.1.1 Intelligent Data Lake User Guide.

Downloading as a TDE File


Effective in version 10.1.1, you can download data in data lake assets as a TDE file. You can directly open the
downloaded file in Tableau. You can search for any data asset and download it as a CSV file or TDE file.

For more information, see the "Discover Data" chapter in the 10.1.1 Intelligent Data Lake User Guide.

Sentry and Ranger Support
Effective in version 10.1.1, Intelligent Data Lake supports Sentry and Ranger on Cloudera and Hortonworks.
Ranger and Sentry offer a centralized security framework to manage granular level access control on
Cloudera and Hortonworks. You can create authorization rules or policies to control the access of data.
Sentry and Ranger support SQL-based authorization for data lake assets.

Mappings
This section describes new mapping features in version 10.1.1.

Informatica Mappings
This section describes new Informatica mappings features in version 10.1.1.

Export Parameters to a Parameter File


You can export a mapping parameter file or a workflow parameter file from the Developer tool. You can
export a parameter file that contains mapping parameters or workflow parameters that you define in the
Developer tool. The Developer tool creates a parameter file in .xml format. Export parameters from the
mapping Parameters tab or from the workflow Parameters tab. Use the parameter file when you run deployed
mappings or workflows.

For more information, see the "Mapping Parameters" chapter in the Informatica Developer 10.1.1 Mapping
Guide or the "Workflow Parameters" chapter in the Informatica Developer 10.1.1 Workflow Guide.

Metadata Manager
This section describes new Metadata Manager features in version 10.1.1.

Dataset Extraction for Cloudera Navigator Resources


Effective in version 10.1.1, Metadata Manager can extract HDFS datasets from Cloudera Navigator. Metadata
Manager displays the datasets in the metadata catalog within the HDFS Datasets logical group.

For more information about Cloudera Navigator resources, see the "Database Management Resources"
chapter in the Informatica 10.1.1 Metadata Manager Administrator Guide.

Mapping Extraction for Informatica Platform Resources


Effective in version 10.1.1, Informatica Platform resources can extract metadata for mappings in deployed
workflows.

Informatica Platform resources that are based on version 10.1.1 applications can extract metadata for
mappings in deployed workflows in addition to mappings that are deployed directly to the application.

When Metadata Manager extracts a mapping in a deployed workflow, it adds the workflow name and
Mapping task name to the mapping name as a prefix. Metadata Manager displays the mapping in the
metadata catalog within the Mappings logical group.

For more information about Informatica Platform resources, see the "Data Integration Resources" chapter in
the Informatica 10.1.1 Metadata Manager Administrator Guide.

PowerExchange Adapters
This section describes new PowerExchange adapter features in version 10.1.1

PowerExchange® Adapters for Informatica


This section describes new Informatica adapter features in version 10.1.1.

PowerExchange for Amazon Redshift


Effective in version 10.1.1, you can enable PowerExchange for Amazon Redshift to run a mapping on the
Blaze engine. When you run the mapping, the Data Integration Service pushes the mapping to a Hadoop
cluster and processes the mapping on the Blaze engine, which significantly increases the performance.

For more information, see the Informatica PowerExchange for Amazon Redshift 10.1.1 User Guide.

PowerExchange for Cassandra


Effective in version 10.1.1, PowerExchange for Cassandra supports the following features:

• You can use the following advanced ODBC driver configurations with PowerExchange for Cassandra:
- Load balancing policy. Determines how the queries are distributed to nodes in a Cassandra cluster
based on the specified DC Aware or Round-Robin policy.
- Filtering. Limits the connections of the drivers to a predefined set of hosts.
• You can enable the following arguments in the ODBC driver to optimize the performance:
- Token Aware. Improves the query latency and reduces load on the Cassandra node.

- Latency Aware. Ignores the slow performing Cassandra nodes while sending queries.

- Null Value Insertion. Enables you to specify null values in an INSERT statement.

- Case Sensitive. Enables you to specify schema, table, and column names in a case-sensitive fashion.

• You can process Cassandra sources and targets that contain the date, smallint, and tinyint data types.

For more information, see the Informatica PowerExchange for Cassandra 10.1.1 User Guide.

PowerExchange for HBase


Effective in version 10.1.1, you can enable PowerExchange for HBase to run a mapping on a Blaze or Spark
engine. When you run the mapping, the Data Integration Service pushes the mapping to a Hadoop cluster and
processes the mapping on the selected engine, which significantly increases the performance.

For more information, see the Informatica PowerExchange for HBase 10.1.1 User Guide.

PowerExchange for Hive


Effective in version 10.1.1, you can configure the Lookup transformation on Hive data objects in mappings in
the native environment.

For more information, see the Informatica PowerExchange for Hive 10.1.1 User Guide.

PowerExchange Adapters for PowerCenter®
This section describes new PowerCenter adapter features in version 10.1.1.

PowerExchange for Amazon Redshift


Effective in version 10.1.1, you can perform the following tasks with PowerExchange for Amazon Redshift:

• You can configure partitioning for Amazon Redshift sources and targets. You can configure the partition
information so that the PowerCenter Integration Service determines the number of partitions to create at
run time.
• You can include a Pipeline Lookup transformation in a mapping.
• The PowerCenter Integration Service can push expression, aggregator, operator, union, sorter, and filter
functions to Amazon Redshift sources and targets when the connection type is ODBC and the ODBC
Subtype is selected as Redshift.
• You can configure advanced filter properties in a mapping.
• You can configure pre-SQL and post-SQL queries for source and target objects in a mapping.
• You can configure a Source transformation to select distinct rows from the Amazon Redshift table and
sort data.
• You can parameterize source and target table names to override the table name in a mapping.
• You can define an SQL query for source and target objects in a mapping to override the default query. You
can enter an SQL statement supported by the Amazon Redshift database.
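
For example, an SQL override for a source object might select distinct rows and sort them before extraction;
the schema, table, and column names here are illustrative:

SELECT DISTINCT customer_id, customer_name
FROM public.customers
WHERE region = 'WEST'
ORDER BY customer_id;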

For more information, see the Informatica 10.1.1 PowerExchange for Amazon Redshift User Guide for
PowerCenter.

PowerExchange for Cassandra


Effective in version 10.1.1, PowerExchange for Cassandra supports the following features:

• You can use the following advanced ODBC driver configurations with PowerExchange for Cassandra:
- Load balancing policy. Determines how the queries are distributed to nodes in a Cassandra cluster
based on the specified DC Aware or Round-Robin policy.
- Filtering. Limits the connections of the drivers to a predefined set of hosts.
• You can enable the following arguments in the ODBC driver to optimize the performance:
- Token Aware. Improves the query latency and reduces load on the Cassandra node.

- Latency Aware. Ignores the slow performing Cassandra nodes while sending queries.

- Null Value Insertion. Enables you to specify null values in an INSERT statement.

- Case Sensitive. Enables you to specify schema, table, and column names in a case-sensitive fashion.

• You can process Cassandra sources and targets that contain the date, smallint, and tinyint data types.

For more information, see the Informatica PowerExchange for Cassandra 10.1.1 User Guide for PowerCenter.

PowerExchange for Vertica


Effective in version 10.1.1, PowerExchange for Vertica supports compressing data in GZIP format. When you
use bulk mode to write large volumes of data to a Vertica target, you can configure the session to create a
staging file. On UNIX operating systems, when you enable file staging, you can also compress the data in a
GZIP format. By compressing the data, you can reduce the size of data that is transferred over the network
and improve session performance.

To compress data, you must re-register the PowerExchange for Vertica plug-in with the PowerCenter
repository.

For more information, see the Informatica PowerExchange for Vertica 10.1.1 User Guide for PowerCenter.

Security
This section describes new security features in version 10.1.1.

Custom Kerberos Libraries


Effective in version 10.1.1, you can configure custom or native database clients and Informatica processes
within an Informatica domain to use custom Kerberos libraries instead of the default Kerberos libraries that
Informatica uses.

For more information, see the "Kerberos Authentication Setup" chapter in the Informatica 10.1.1 Security
Guide.

Scheduler Service Support in Kerberos-Enabled Domains


Effective in version 10.1.1, you can use the Scheduler Service to run mappings, workflows, profiles and
scorecards in a domain that uses Kerberos authentication.

Single Sign-on for Informatica Web Applications


Effective in version 10.1.1, you can configure single sign-on (SSO) using Security Assertion Markup Language
(SAML) to log into the Administrator tool, the Analyst tool, and the Monitoring tool.

Security Assertion Markup Language is an XML-based data format for exchanging authentication and
authorization information between a service provider and an identity provider. In an Informatica domain, the
Informatica web application is the service provider. Microsoft Active Directory Federation Services (AD FS)
2.0 is the identity provider, which authenticates web application users with your organization's LDAP or
Active Directory identity store.

For more information, see the "Single Sign-on for Informatica Web Applications" chapter in the Informatica
10.1.1 Security Guide.

Transformations
This section describes new transformation features in version 10.1.1.

Informatica Transformations
This section describes new features in Informatica transformations in version 10.1.1.

Address Validator Transformation


This section describes the new Address Validator transformation features.

The Address Validator transformation contains additional address functionality for the following countries:

All Countries
Effective in version 10.1.1, you can add the Count Number port to an output address. The Count Number port
value indicates the position of each address in a set of suggestions that the transformation returns in
interactive mode or suggestion list mode.

For example, the Count Number port returns the number 1 for the first address in the set. The port returns the
number 2 for the second address in the set. The number increments by 1 for each address that address
validation returns.

Find the Count Number port in the Status Info port group.

China
Multi-language address parsing and verification

Effective in version 10.1.1, you can configure the Address Validator transformation to return the street
descriptor and street directional information in a valid China address in a transliterated Latin script
(Pinyin) or in English. The transformation returns the other elements in the address in the Hanzi script.

To specify the output language, set the Preferred Language advanced property on the transformation.

Single-line verification of China addresses in suggestion list mode

Effective in version 10.1.1, you can configure the Address Validator transformation to return valid
suggestions for a China address that you enter on a single line in fast completion mode. To enter an
address on a single line, select a Complete Address port from the Multiline port group. Enter the address
in the Hanzi script.

When you enter a partial address, the transformation returns one or more address suggestions for the
address that you enter. When you enter a complete valid address, the transformation returns the valid
version of the address from the reference database.

Ireland
Multi-language address parsing and verification

Effective in version 10.1.1, you can configure the Address Validator transformation to read and write the
street, locality, and county information for an Ireland address in the Irish language.

An Post, the Irish postal service, maintains the Irish-language information in addition to the English-
language addresses. You can include Irish-language street, locality, and county information in an input
address and retrieve the valid English-language version of the address. You can enter an English-
language address and retrieve an address that includes the street, locality, and county information in the
Irish language. Address validation returns all other information in English.

To specify the output language, set the Preferred Language advanced property on the transformation.

Rooftop geocoordinates in Ireland addresses

Effective in version 10.1.1, you can configure the Address Validator transformation to return rooftop
geocoordinates for an address in Ireland.

To return the geocoordinates, add the Geocoding Complete port to the output address. Find the
Geocoding Complete port in the Geocoding port group. To specify Rooftop geocoordinates, set the
Geocode Data Type advanced property on the transformation.

Support for preferred descriptors in Ireland addresses

Effective in version 10.1.1, you can configure the Address Validator transformation to return the short or
long forms of the following elements in the English language:

• Street descriptors

• Directional values

To specify a preference for the elements, set the Global Preferred Descriptor advanced property on the
transformation.

Note: The Address Validator transformation writes all street information to the street name field in an
Irish-language address.

Italy
Effective in version 10.1.1, you can configure the Address Validator transformation to add the ISTAT code to
a valid Italy address. The ISTAT code contains characters that identify the province, municipality, and region
to which the address belongs. The Italian National Institute of Statistics (ISTAT) maintains the ISTAT codes.

To add the ISTAT code to an address, select the ISTAT Code port. Find the ISTAT Code port in the IT
Supplementary port group.

Japan
Geocoding enrichment for Japan addresses

Effective in version 10.1.1, you can configure the Address Validator transformation to return standard
geocoordinates for addresses in Japan.

The transformation can return geocoordinates at multiple levels of accuracy. When a valid address
contains information to the Ban level, the transformation returns house number-level geocoordinates.
When a valid address contains information to the Chome level, the transformation returns street-level
geocoordinates. If an address does not contain Ban or Chome information, Address Verification returns
locality-level geocoordinates.

To return the geocoordinates, add the Geocoding Complete port to the output address. Find the
Geocoding Complete port in the Geocoding port group.

Single-line verification of Japan addresses in suggestion list mode

Effective in version 10.1.1, you can configure the Address Validator transformation to return valid
suggestions for a Japan address that you enter on a single line in suggestion list mode. You can retrieve
suggestions for an address that you enter in the Kanji script or the Kana script. To enter an address on a
single line, select a Complete Address port from the Multiline port group.

When you enter a partial address, the transformation returns one or more address suggestions for the
address that you enter. When you enter a complete valid address, the transformation returns the valid
version of the address from the reference database.

South Korea
Support for Revised Romanization transliteration in South Korea addresses

Effective in version 10.1.1, the Address Validator transformation can use the Revised Romanization
system to transliterate an address between Hangul and Latin character sets. To specify a character set
for output addresses from South Korea, use the Preferred Script advanced property.

Updates to post code verification in South Korea addresses

Effective in version 10.1.1, the Address Validator transformation adds a five-digit post code to a fully
valid input address that does not include a post code. The five-digit post code represents the current
post code format in use in South Korea. The transformation can add the five-digit post code to a fully
valid lot-based address and a fully valid street-based address.

To verify addresses in the older, lot-based format, use the Matching Extended Archive advanced
property.

Spain
Effective in version 10.1.1, you can configure the Address Validator transformation to add the INE code to a
valid Spain address. The INE code contains characters that identify the province, municipality, and street in
the address. The National Institute of Statistics (INE) in Spain maintains the INE codes.

To add an INE code to an address, select one or more of the following ports:

• INE Municipality Code


• INE Province Code
• INE Street Code
Find the INE Code ports in the ES Supplementary port group.

United States
Support for CASS Cycle O requirements

Effective in version 10.1.1, the Address Validator transformation adds features that support the
proposed requirements of the Coding Accuracy Support System (CASS) Cycle O standard.

To prepare for the Cycle O standard, the transformation includes the following features:

• Private mailbox and commercial mail receiving agency identification


The United States Postal Service updates the CASS requirements for private mailbox (PMB)
addresses and commercial mail receiving agency (CMRA) addresses in Cycle O. To meet the Cycle O
standard, the Address Validator transformation adds PMB as a prefix before a private mailbox
number in a CMRA address. If a pound sign (#) precedes a private mailbox number in the address, the
transformation converts the pound sign to PMB, as shown in the example after this list. To comply with the
Cycle O standard, the transformation does not use the PMB number to verify Delivery Point Validation (DPV)
data for an address.
• DPV PBSA Indicator port for post office box street address (PBSA) identification
The United States Postal Service can recognize post office box addresses in a street address format.
To identify PBSA addresses in an address set, use the DPV PBSA Indicator port. Find the DPV PBSA
Indicator port in the US Specific port group.
For example, the following address identifies post office box number 3094 at a post office on South
Center Street:
131 S Center St Unit 3094
Collierville TN 38027-0419
• DPV ZIP Code Validation port for Form 3553 completion
The DPV ZIP Code Validation port indicates whether an address is valid for inclusion in the total
address count on CASS Form 3553. If an address passes delivery point validation but does not
include a deliverable ZIP+4 Code, you cannot include the address in the total address count. Find the
DPV ZIP Code Validation port in the US Specific port group.
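
For example, the private mailbox conversion described in the first item changes a hypothetical input line as
follows:

2100 Main St # 44 becomes 2100 Main St PMB 44
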
Improved parsing of non-standard first-line data in United States addresses

Effective in version 10.1.1, the Address Validator transformation parses non-standard mailbox data into
sub-building elements. The non-standard data might identify a college campus mailbox or a courtroom
at a courthouse.

Support for global preferred descriptors in United States addresses

Effective in version 10.1.1, you can return the short or long forms of the following elements in a United
States address:

• Street descriptors

• Directional values
• Sub-building descriptors
To specify the format of the elements that the transformation returns, set the Global Preferred
Descriptor advanced property on the transformation.

For more information, see the Informatica 10.1.1 Developer Transformation Guide and the Informatica 10.1.1
Address Validator Port Reference.

Write Transformation
Effective in version 10.1.1, when you create a Write transformation from an existing transformation in a
mapping, you can specify the type of link for the input ports of the Write transformation.

You can link ports by name. Also, in a dynamic mapping, you can link ports by name, create a dynamic port
based on a mapping flow, or link ports at run time based on a link policy.

For more information, see the "Write Transformation" chapter in the Informatica 10.1.1 Developer
Transformation Guide.

Web Services
This section describes new web services features in version 10.1.1.

Informatica Web Services


This section describes new Informatica web service features in version 10.1.1.

REST Web Services


Effective in version 10.1.1, you can create an Informatica REST web service that returns data to a web
service client in JSON or XML format.

An Informatica REST web service is a web service that receives an HTTP request to perform a GET operation.
A GET operation retrieves data. The REST request is a simple URI string from an internet browser. The client
limits the web service output data by adding filter parameters to the URI.
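
For example, a client might request filtered customer data with a GET request such as the following URI. The
host, service, resource, and parameter names are illustrative, and the exact URI format depends on how you
define and deploy the REST web service:

http://<host>:<port>/RestService/CustomerService/Customer?Region=West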

Define a REST web service resource in the Developer tool. A REST web service resource contains the
definition of the REST web service response message and the mapping that returns the response. When you
create an Informatica REST web service, you can define the resource from a data object or you can manually
define the resource.

Workflows
This section describes new workflow features in version 10.1.1.

Informatica Workflows
This section describes new features in Informatica workflows in version 10.1.1.

Terminate Event
Effective in version 10.1.1, you can add a Terminate event to a workflow. A Terminate event defines a point
before the End event at which the workflow can end. A workflow can contain one or more Terminate events.

A workflow terminates if you connect a task or a gateway to a Terminate event and the task output satisfies
a condition on the sequence flow. The Terminate event aborts the workflow before any further task in the
workflow can run.

Add a Terminate event to a workflow if the workflow data can reach a point at which there is no need to run
additional tasks. For example, you might add a Terminate event to end a workflow that contains a Mapping
task and a Human task. Connect the Mapping task to an Exclusive gateway, and then connect the gateway to
a Human task and to a Terminate event. If the Mapping task generates exception record data for the Human
task, the workflow follows the sequence flow to the Human task. If the Mapping task does not generate
exception record data, the workflow follows the sequence flow to the Terminate event.

For more information, see the Informatica 10.1.1 Developer Workflow Guide.

User Permissions on Human Tasks


Effective in version 10.1.1, you can set user permissions on Human task data. The permissions specify the
data that users can view and the types of action that users can perform in Human task instances in the
Analyst tool. You can set the permissions within a step in a Human task when you design a workflow. The
permissions apply to all users who can view or edit a task instance that the step defines.

By default, Analyst tool users can view all data and perform any action in the task instances that they work
on.

You can set viewing permissions and editing permissions. The viewing permissions define the data that the
Analyst tool displays for the task instances that the step defines. The editing permissions define the actions
that users can take to update the task instance data. Viewing permissions take precedence over editing
permissions. If you grant editing permissions on a column and you do not grant viewing permissions on the
column, Analyst tool users cannot edit the column data.

For more information, see the Informatica 10.1.1 Developer Workflow Guide.

Workflow Variables in Human Task Instance Notifications


Effective in version 10.1.1, you can use workflow variables to write information about a Human task instance
to an email notification. The variables record information about the task instance when a user completes,
escalates, or reassigns a task instance.

To display the list of variables, open the Human task and select the step that defines the Human task
instances. On the Notifications view, select the message body of the email notification and press the
$+CTRL+SPACE keys.

The notification can display the following variables:

$[Link]

The time that the workflow engine performs the user instruction to escalate, reassign, or complete the
task instance.

$[Link]

The owner of the task instance at the time that the workflow engine escalates or completes the task. Or,
the owner of the task instance after the engine reassigns the task instance.

$[Link]

The task instance status after the engine performs the user instruction to escalate, reassign, or
complete the task instance. The status names are READY and IN_PROGRESS.

$[Link]

The type of instruction that the engine performs. The variable values are escalate, reassign, and
complete.

$[Link]

The task instance identifier that the Analyst tool displays.

For more information, see the Informatica 10.1.1 Developer Workflow Guide.

Chapter 10

Changes (10.1.1)
This chapter includes the following topics:

• Support Changes
• Big Data
• Business Glossary
• Data Integration Service
• Data Types
• Informatica Analyst
• Informatica Developer
• Mappings
• Enterprise Information Catalog
• Metadata Manager
• PowerExchange Adapters
• Transformations
• Workflows
• Documentation

Support Changes
This section describes support changes in version 10.1.1.

Big Data Management Hive Engine


Effective in version 10.1.1, Informatica dropped support for HiveServer2, which the Hive engine used to run
mappings.

Previously, the Hive engine supported the Hive driver and HiveServer2 to run mappings in the Hadoop
environment. HiveServer2 and the Hive driver convert HiveQL queries to MapReduce or Tez jobs that are
processed on the Hadoop cluster.

If you install Big Data Management 10.1.1 or upgrade to version 10.1.1, the Hive engine uses the Hive driver
when you run the mappings. The Hive engine no longer supports HiveServer2 to run mappings in the Hadoop
environment. Hive sources and targets that use the HiveServer2 service on the Hadoop cluster are still
supported.

To run mappings in the Hadoop environment, Informatica recommends that you select all run-time engines.
The Data Integration Service uses a proprietary rule-based methodology to determine the best engine to run
the mapping.

For information about configuring the run-time engines for your Hadoop distribution, see the Informatica Big
Data Management 10.1.1 Installation and Configuration Guide. For information about mapping objects that the
run-time engines support, see the Informatica Big Data Management 10.1.1 User Guide.

Big Data Management Hadoop Distributions


At release date, Big Data Management 10.1.1 supports the following Hadoop distributions:

• Azure HDInsight v. 3.4


• Cloudera CDH v. 5.8
• IBM BigInsights v. 4.2
• Hortonworks HDP v. 2.5
• Amazon EMR v. 5.0

To see a list of the latest supported versions, see the Product Availability Matrix on the Informatica Customer
Portal: [Link]

MapR Support
Effective in version 10.1.1, Informatica deferred support for Big Data Management on a MapR cluster. To run
mappings on a MapR cluster, use Big Data Management 10.1. Informatica plans to reinstate support in a
future release.

Some references to MapR remain in documentation in the form of examples. Apply the structure of these
examples to your Hadoop distribution.

Amazon EMR Support


Effective in version 10.1.1, you can install Big Data Management in the Amazon EMR environment. You can
choose from the following installation methods:

• Download and install from an RPM package. When you install Big Data Management in an Amazon EMR
environment, you install Big Data Management elements on a local machine to run the Model Repository
Service, Data Integration Service, and other services.
• Install an Informatica instance in the Amazon cloud environment. When you create an implementation of
Big Data Management in the Amazon cloud, you bring online virtual machines where you install and run
Big Data Management.

For more information about installing and configuring Big Data Management on Amazon EMR, see the
Informatica Big Data Management 10.1.1 Installation and Configuration Guide.

Big Data Management Spark Support


Effective in version 10.1.1, you can configure the Spark engine on all supported Hadoop distributions. You
can configure Big Data Management to use one of the following Spark versions based on the Hadoop
distribution that you use:

• Cloudera Spark 1.6 and Apache Spark 2.0.1 for Cloudera cdh5u8 distribution.
• Apache Spark 2.0.1 for all Hadoop distributions.

For more information, see the Informatica Big Data Management 10.1.1 Installation and Configuration Guide.

Data Analyzer
Effective in version 10.1.1, Informatica dropped support for Data Analyzer. Informatica recommends that you
use a third-party reporting tool to run PowerCenter and Metadata Manager reports. You can use the
recommended SQL queries for building all the reports shipped with earlier versions of PowerCenter.

Operating System
Effective in version 10.1.1, Informatica added support for the following operating systems:

• Solaris 11
• Windows 10 for Informatica Clients

PowerExchange for SAP NetWeaver


Effective in version 10.1.1, Informatica implemented the following changes in PowerExchange for SAP
NetWeaver support:

Analytic Business Components support

Level of support: Dropped. Effective in version 10.1.1, Informatica dropped support for the Analytic
Business Components (ABC) functionality. You cannot use objects in the ABC repository to read and
transform SAP data. Informatica will not ship the ABC transport files.

SAP R/3 version 4.7 support

Level of support: Dropped. Effective in version 10.1.1, Informatica dropped support for SAP R/3 4.7
systems. Upgrade to SAP ECC version 5.0 or later.

Reporting and Dashboards Service


Effective in version 10.1.1, Informatica dropped support for the Reporting and Dashboards Service.
Informatica recommends that you use a third-party reporting tool to run PowerCenter and Metadata Manager
reports. You can use the recommended SQL queries for building all the reports shipped with earlier versions
of PowerCenter.

Reporting Service
Effective in version 10.1.1, Informatica dropped support for the Reporting Service. Informatica recommends
that you use a third-party reporting tool to run PowerCenter and Metadata Manager reports. You can use the
recommended SQL queries for building all the reports shipped with earlier versions of PowerCenter.

Big Data
This section describes the changes to big data in version 10.1.1.

Functions Supported in the Hadoop Environment
Effective in 10.1.1, the following support changes affect functions in the Hadoop environment:

AES_DECRYPT

Returns decrypted data to string format. Supported on the Spark engine. Previously supported only on
the Blaze and Hive engines.

AES_ENCRYPT

Returns data in encrypted format. Supported on the Spark engine. Previously supported only on the
Blaze and Hive engines.

COMPRESS

Compresses data using the zlib 1.2.1 compression algorithm. Supported on the Spark engine. Previously
supported only on the Blaze and Hive engines.

CRC32

Returns a 32-bit Cyclic Redundancy Check (CRC32) value. Supported on the Spark engine. Previously
supported only on the Blaze and Hive engines.

DECOMPRESS

Decompresses data using the zlib 1.2.1 compression algorithm. Supported with restrictions on the
Spark engine. Previously supported only on the Blaze and Hive engines.

DEC_BASE64

Decodes a base 64 encoded value and returns a string with the binary data representation of the data.
Supported on the Spark engine. Previously supported only on the Blaze and Hive engines.

ENC_BASE64

Encodes data by converting binary data to string data using Multipurpose Internet Mail Extensions
(MIME) encoding. Supported on the Spark engine. Previously supported only on the Blaze and Hive
engines.

MD5

Calculates the checksum of the input value. The function uses Message-Digest algorithm 5 (MD5).
Supported on the Spark engine. Previously supported only on the Blaze and Hive engines.

UUID4

Returns a randomly generated 16-byte binary value that complies with variant 4 of the UUID
specification described in RFC 4122. Supported on the Spark engine without restrictions. Previously
supported on the Blaze engine without restrictions and on the Spark and Hive engines with restrictions.

UUID_UNPARSE

Converts a 16-byte binary value to a 36-character string representation as specified in RFC 4122.
Supported on the Spark engine without restrictions. Previously supported on the Blaze engine without
restrictions and on the Spark and Hive engines with restrictions.
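
For example, a mapping that runs on the Spark engine can now calculate a row checksum in an Expression
transformation with a call such as the following; the port name is illustrative:

MD5(CUSTOMER_RECORD)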

Hadoop Configuration Manager


Effective in version 10.1.1, the Big Data Management Configuration Utility has the following changes:

• The utility is renamed to the Hadoop Configuration Manager.

• The Hadoop Configuration Manager supports configuring Big Data Management on Azure HDInsight
clusters in addition to other Hadoop clusters.
For more information about the Hadoop Configuration Manager, see the Informatica Big Data Management
10.1.1 Installation and Configuration Guide.

Business Glossary
This section describes the changes to Business Glossary in version 10.1.1.

Export File Restriction


Effective in version 10.1.1, the Business Glossary export in the Analyst tool and command line has the
following changed behavior:

Truncation of characters in a Microsoft Excel export file cell

When you export Glossary assets that contain more than 32,767 characters in one Microsoft Excel cell,
the Analyst tool automatically truncates the contents of the cell to fewer than 32,763 characters.

Microsoft Excel supports only up to 32,767 characters in a cell. Previously, when you exported a
glossary, Microsoft Excel truncated long text properties that contained more than 32,767 characters in a
cell, causing loss of data without any warning.

For more information about Export and Import, see the "Glossary Administration" chapter in the
Informatica 10.1.1 Business Glossary Guide.

Data Integration Service


This section describes changes to the Data Integration Service in version 10.1.1.

Execution Options in the Data Integration Properties


Effective in version 10.1.1, you no longer need to restart the Data Integration Service when you edit the
following Data Integration Services properties:

• Cache Directory
• Home Directory
• Maximum Parallelism
• Rejected Files Directory
• Source Directory
• State Store
• Target Directory
• Temporary Directories

Previously, you had to restart the Data Integration Service when you edited these properties.

Data Types
This section describes changes to data types in version 10.1.1.

Informatica Data Types


This section describes changes to transformation data types in the Developer tool.

Double Data Type


Effective in version 10.1.1, you can edit the precision and scale for double data types. The scale must be less
than or equal to the precision.

Previously, the precision was set to 15 and the scale was set to 0.

For more information, see the "Data Type Reference" appendix in the Informatica 10.1.1 Developer Tool Guide.

Informatica Analyst
This section describes changes to the Analyst tool in version 10.1.1.

Profiles
This section describes new Analyst tool features for profiles.

Run-time Environment
Effective in version 10.1.1, after you choose the Hive option as the run-time environment, select a Hadoop
connection to run the profiles.

Previously, after you chose the Hive option as the run-time environment, you selected a Hive connection to
run the profiles.

For more information about run-time environment, see the "Column Profiles in Informatica Analyst" chapter in
the Informatica 10.1.1 Data Discovery Guide.

Informatica Developer
This section describes changes to the Developer tool in version 10.1.1.

Profiles
This section describes new Developer tool features for profiles.

Run-time Environment
Effective in version 10.1.1, after you choose the Hive option as the run-time environment, select a Hadoop
connection to run the profiles.

Previously, after you chose the Hive option as the run-time environment, you selected a Hive connection to
run the profiles.

For more information about run-time environment, see the "Data Object Profiles" chapter in the Informatica
10.1.1 Data Discovery Guide.

Mappings
This section describes changes to mappings in version 10.1.1.

Informatica Mappings
This section describes the changes to the Informatica mappings in version 10.1.1.

Reorder Generated Ports in a Dynamic Port


Effective in version 10.1.1, you can change the order of generated ports based on the following options:

• The order of ports in the group or dynamic port of the upstream transformation.
• The order of input rules for the dynamic port.
• The order of ports in the nearest transformation with static ports.
The default is to reorder based on the ports in the upstream transformation.

Previously, you could reorder generated ports based on the order of input rules for the dynamic port.

For more information, see the "Dynamic Mappings" chapter in the Informatica 10.1.1 Developer Mapping
Guide.

Enterprise Information Catalog


This section describes changes to Enterprise Information Catalog in version 10.1.1.

HDFS Scanner Enhancement


Effective in version 10.1.1, you can extract metadata from flat file types using the HDFS resource scanner.

Relationships View
Effective in version 10.1.1, you can view business terms, related glossary assets, related technical assets,
and similar columns for the selected asset.

Previously, you could view asset relationships such as columns, data domains, tables, and views.

For more information about relationships view, see the "View Relationships" chapter in the Informatica 10.1.1
Enterprise Information Catalog User Guide.

Metadata Manager
This section describes changes to Metadata Manager in version 10.1.1.

Cloudera Navigator Resources


Effective in version 10.1.1, Cloudera Navigator resources have the following behavior changes:

Incremental loading changes

Incremental loading for Cloudera Navigator resources is disabled by default. Previously, incremental
loading was enabled by default.

When incremental loading is enabled, Metadata Manager performs a full metadata load when the
Cloudera administrator invokes a purge operation in Cloudera Navigator after the last successful
metadata load.

Additionally, there are new guidelines that explain when you might want to disable incremental loading.

Search query changes

You can use the search query to exclude entity types besides HDFS entities from the metadata load. For
example, you can use the search query to exclude YARN or Oozie job executions.
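
As a sketch, a search query that excludes those job executions might look like the following; the field
name and syntax are assumptions about the Cloudera Navigator search query language, not documented
Metadata Manager settings:

(-sourceType:yarn) AND (-sourceType:oozie)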

Data lineage changes

To reduce complexity of the data lineage diagram, Metadata Manager has the following changes:

• Metadata Manager no longer displays data lineage for Hive query template parts. You can run data
lineage analysis on Hive query templates instead.
• For partitioned Hive tables, Metadata Manager displays data lineage links between each column in
the table and the parent directory that contains the related HDFS entities. Previously, Metadata
Manager displayed a data lineage link between each column and each related HDFS entity.

For more information about Cloudera Navigator resources, see the "Database Management Resources"
chapter in the Informatica 10.1.1 Metadata Manager Administrator Guide.

Netezza Resources
Effective in version 10.1.1, Metadata Manager supports multiple schemas for Netezza resources.

Netezza resources have the following behavior changes:

• When you create or edit a Netezza resource, you select the schemas from which to extract metadata. You
can select one or multiple schemas.
• Metadata Manager organizes Netezza objects in the metadata catalog by schema. The database does not
appear in the metadata catalog.
• When you configure connection assignments to Netezza, you select the schema to which you want to
assign the connection.
Because of these changes, Netezza resources behave like other types of relational resources.

Previously, when you created or edited a Netezza resource, you could not select the schemas from which to
extract metadata. If you created a resource from a Netezza database that included multiple schemas,
Metadata Manager ignored the schema information. Metadata Manager organized Netezza objects in the
metadata catalog by database. When you configured connection assignments to Netezza, you selected the
database to which to assign the connection.

For more information about Netezza resources, see the "Database Management Resources" chapter in the
Informatica 10.1.1 Metadata Manager Administrator Guide.

PowerExchange Adapters
This section describes changes to PowerExchange adapters in version 10.1.1.

PowerExchange Adapters for Informatica


This section describes changes to Informatica adapters in version 10.1.1.

PowerExchange for Hive


Effective in version 10.1.1, PowerExchange for Hive has the following connection modes for the Hive connection:

• Access Hive as a source or target


• Use Hive to run mappings in Hadoop cluster

Previously, the connection modes were:

• Access HiveServer2 to run mappings


• Access Hive CLI to run mappings

For more information, see the Informatica 10.1.1 PowerExchange for Hive User Guide.

PowerExchange for Tableau


Effective in version 10.1.1, PowerExchange for Tableau has the following changes:

• PowerExchange for Tableau installs with Informatica 10.1.1.


Previously, PowerExchange for Tableau had a separate installer.
• When you configure a target operation to publish a Tableau Data Extract (TDE) file, you can use the
append operation in the advanced properties to add data to an existing TDE file in Tableau Server and
Tableau Online.
Previously, you could configure the append operation to publish the TDE file only to Tableau Desktop.

For more information, see the Informatica 10.1.1 PowerExchange for Tableau User Guide.

PowerExchange Adapters for PowerCenter


This section describes changes to PowerCenter adapters in version 10.1.1.

PowerExchange for Essbase


Effective in version 10.1.1, PowerExchange for Essbase installs with PowerCenter.

Previously, PowerExchange for Essbase had a separate installer.

For more information, see the Informatica 10.1.1 PowerExchange for Essbase User Guide for PowerCenter.

PowerExchange for Greenplum


Effective in version 10.1.1, PowerExchange for Greenplum installs with PowerCenter.

Previously, PowerExchange for Greenplum had a separate installer.

For more information, see the Informatica 10.1.1 PowerExchange for Greenplum User Guide for PowerCenter.

PowerExchange for Microsoft Dynamics CRM
Effective in version 10.1.1, PowerExchange for Microsoft Dynamics CRM installs with PowerCenter.

Previously, PowerExchange for Microsoft Dynamics CRM had a separate installer.

For more information, see the Informatica 10.1.1 PowerExchange for Microsoft Dynamics CRM User Guide for
PowerCenter.

PowerExchange for Tableau


Effective in version 10.1.1, PowerExchange for Tableau has the following changes:

• PowerExchange for Tableau installs with PowerCenter.


Previously, PowerExchange for Tableau had a separate installer.
• When you configure a target operation to publish a Tableau Data Extract (TDE) file, you can configure the
append operation in the session properties to add data to an existing TDE file in Tableau Server and
Tableau Online.
Previously, you could configure the append operation to publish the TDE file only to Tableau Desktop.

For more information, see the Informatica 10.1.1 PowerExchange for Tableau User Guide for PowerCenter.

Transformations
This section describes changed transformation behavior in version 10.1.1.

Informatica Transformations
This section describes the changes to the Informatica transformations in version 10.1.1.

Address Validator Transformation


Effective in version 10.1.1, the Address Validator transformation uses version 5.9.0 of the Informatica
Address Verification software engine. The engine enables the features that Informatica adds to the Address
Validator transformation in version 10.1.1.

Previously, the transformation used version 5.8.1 of the engine.

For more information, see the Informatica 10.1.1 Developer Transformation Guide and the Informatica 10.1.1
Address Validator Port Reference.

Workflows
This section describes changed workflow behavior in version 10.1.1.

Informatica Workflows
This section describes the changes to Informatica workflow behavior in version 10.1.1.

Nested Inclusive Gateways
Effective in version 10.1.1, you can add one or more pairs of gateways to a sequence flow between two
Inclusive gateways or two Exclusive gateways.

Previously, you invalidated the workflow if you added a pair of gateways to a sequence flow between two
Inclusive gateways.

For more information, see the Informatica 10.1.1 Developer Workflow Guide.

Documentation
This section describes documentation changes in version 10.1.1.

Metadata Manager Documentation


Effective in version 10.1.1, the Informatica Metadata Manager Repository Reports Reference is obsolete
because Informatica dropped support for the Reporting and Dashboards Service and for JasperReports
Server.

PowerExchange for SAP NetWeaver Documentation


Effective in version 10.1.1, the following guides are obsolete because Informatica dropped support for the
Analytic Business Components functionality:

• Informatica PowerExchange for SAP NetWeaver Analytic Business Components Guide


• Informatica PowerExchange for SAP NetWeaver Analytic Business Components Transport Version
Installation Notice

Chapter 11

Release Tasks (10.1.1)


This chapter includes the following topic:

• Metadata Manager

Metadata Manager
This section describes release tasks for Metadata Manager in version 10.1.1.

Business Intelligence Resources


Effective in version 10.1.1, the Worker Threads configuration property for some Business Intelligence
resources is replaced with the Multiple Threads configuration property. If you set the Worker Threads
property in the previous version of Metadata Manager, set the Multiple Threads property to the same value
after you upgrade.

Update the value of the Multiple Threads property for the following resources:

• Business Objects
• Cognos
• Oracle Business Intelligence Enterprise Edition
• Tableau
The Multiple Threads configuration property controls the number of worker threads that the Metadata
Manager Agent uses to extract metadata asynchronously. If you do not update the Multiple Threads property
after upgrade, the Metadata Manager Agent calculates the number of worker threads. The Metadata Manager
Agent allocates between one and six threads based on the JVM architecture and the number of available CPU
cores on the machine that runs the Metadata Manager Agent.

For more information about the Multiple Threads configuration property, see the "Business Intelligence
Resources" chapter in the Informatica 10.1.1 Metadata Manager Administrator Guide.

Cloudera Navigator Resources


Effective in version 10.1, you must configure the Java heap size for the Cloudera Navigator server and the
maximum heap size for the Metadata Manager Service. If you do not correctly configure the heap sizes, the
metadata load can fail.

Set the Java heap size for the Cloudera Navigator Server to at least 2 GB. If the heap size is not sufficient, the
resource load fails with a connection refused error.

Set the maximum heap size for the Metadata Manager Service to at least 4 GB. If you perform simultaneous
resource loads, increase the maximum heap size by at least 1 GB for each resource load. For example, to
load two Cloudera Navigator resources simultaneously, increase the maximum heap size by 2 GB. Therefore,
you would set the Max Heap Size property for the Metadata Manager Service to at least 6144 MB (6 GB). If
the maximum heap size is not sufficient, the load fails with an out of memory error.

For more information about Cloudera Navigator resources, see the "Database Management Resources"
chapter in the Informatica 10.1.1 Metadata Manager Administrator Guide.

Tableau Resources
Effective in version 10.1.1, the Tableau model has minor changes. Therefore, you must purge and reload
Tableau resources after you upgrade.

For more information about Tableau resources, see the "Business Intelligence Resources" chapter in the
Informatica 10.1.1 Metadata Manager Administrator Guide.

Part III: Version 10.1
This part contains the following chapters:

• New Products (10.1)
• New Features (10.1)
• Changes (10.1)
• Release Tasks (10.1)

Chapter 12

New Products (10.1)


This chapter includes the following topics:

• Intelligent Data Lake
• PowerExchange Adapters

Intelligent Data Lake


With the advent of big data technologies, many organizations are adopting a new information storage model
called data lake to solve data management challenges. The data lake model is being adopted for diverse use
cases, such as business intelligence, analytics, regulatory compliance, and fraud detection.

A data lake is a shared repository of raw and enterprise data from a variety of sources. It is often built over a
distributed Hadoop cluster, which provides an economical and scalable persistence and compute layer.
Hadoop makes it possible to store large volumes of structured and unstructured data from various enterprise
systems within and outside the organization. Data in the lake can include raw and refined data, master data
and transactional data, log files, and machine data.

Organizations are also looking to provide ways for different kinds of users to access and work with all of the
data in the enterprise, within the Hadoop data lake as well as data outside the data lake. They want data
analysts and data scientists to be able to use the data lake for ad-hoc self-service analytics to drive business
innovation, without exposing the complexity of underlying technologies or the need for coding skills. IT and
data governance staff want to monitor data-related user activities in the enterprise. Without a strong data
management and governance foundation enabled by intelligence, data lakes can turn into data swamps.

In version 10.1, Informatica introduces Intelligent Data Lake, a new product to help customers derive more
value from their Hadoop-based data lake and make data available to all users in the organization.

Intelligent Data Lake is a collaborative self-service big data discovery and preparation solution for data
analysts and data scientists. It enables analysts to rapidly discover and turn raw data into insight and allows
IT to ensure quality, visibility, and governance. With Intelligent Data Lake, analysts can spend more time on
analysis and less time on finding and preparing data.

Intelligent Data Lake provides the following benefits:

• Data analysts can quickly and easily find and explore trusted data assets within the data lake and outside
the data lake using semantic search and smart recommendations.
• Data analysts can transform, cleanse, and enrich data in the data lake using an Excel-like spreadsheet
interface in a self-service manner without the need for coding skills.
• Data analysts can publish data and share knowledge with the rest of the community and analyze the data
using their choice of BI or analytic tools.

• IT and governance staff can monitor user activity related to data usage in the lake.
• IT can track data lineage to verify that data is coming from the right sources and going to the right
targets.
• IT can enforce appropriate security and governance on the data lake.
• IT can operationalize the work done by data analysts into a data delivery process that can be repeated and
scheduled.

Intelligent Data Lake has the following features:


Search

• Find the data in the lake as well as in the other enterprise systems using smart search and inference-
based results.
• Filter assets based on dynamic facets using system attributes and custom defined classifications.

Explore

• Get an overview of assets, including custom attributes, profiling statistics for data quality, data
domains for business content, and usage information.
• Add business context information by crowd-sourcing metadata enrichment and tagging.
• Preview sample data to get a sense of the data asset based on user credentials.
• Get lineage of assets to understand where data is coming from and where it is going and to build
trust in the data.
• Know how the data asset is related to other assets in the enterprise based on associations with other
tables or views, users, reports and data domains.
• Progressively discover additional assets with lineage and relationship views.

Acquire

• Upload personal delimited files to the lake using a wizard-based interface.


Hive tables are automatically created for the uploads in the optimal format.
• Create, append to, or overwrite assets for uploaded data.

Collaborate

• Organize work by adding data assets to projects.


• Add collaborators to projects with different roles, such as co-owner, editor, or viewer, and with
different privileges.

Recommendations

• Improve productivity by using recommendations based on the behavior and shared knowledge of
other users.
• Get recommendations for alternate assets that can be used in a project.
• Get recommendations for additional assets that can be used in a project.
• Recommendations change based on what is in the project.

Prepare

• Use an Excel-like environment to interactively specify transformations using sample data.


• See sheet-level and column-level overviews, including value distributions and numeric and date
distributions.
• Add transformations in the form of recipe steps and see the results immediately on the sheets.

• Perform column-level data cleansing and data transformation using string, math, date, logical
operations.
• Perform sheet-level operations to combine, merge, aggregate, or filter data.
• Refresh the sample in the worksheet if the data in the underlying tables change.
• Derive sheets from existing sheets and get alerts when parent sheets change.
• All transformation steps are stored in the recipe, which can be played back interactively.

Publish

• Use the power of the underlying Hadoop system to run large-scale data transformation without
coding or scripting.
• Run data preparation steps on actual large data sets in the lake to create new data assets.
• Publish the data in the lake as a Hive table in the desired database.
• Create, append, or overwrite assets for published data.

Data Asset Operations

• Export data from the lake to a CSV file.


• Copy data into another database or table.
• Delete the data asset if allowed by user credentials.
My Activities

• Keep track of upload activities and their status.


• Keep track of publications and their status.
• View log files in case of errors and share with IT administrators if needed.

IT Monitoring

• Keep track of user, data asset, and project activities by building reports on top of the audit database.
• Find information such as the top active users, the top datasets by size, prior updates, most reused
assets, and the most active projects.

IT Operationalization

• Operationalize the ad-hoc work done by analysts.


• Use Informatica Developer to customize and optimize the Informatica Big Data Management
mappings translated from the recipes that analysts create.
• Deploy, schedule, and monitor the Informatica Big Data Management mappings to ensure that data
assets are delivered at the right time to the right destinations.
• Make sure that the entitlements for access to various databases and tables in the data lake are
according to security policies.

PowerExchange Adapters

PowerExchange Adapters for Informatica


This section describes new Informatica adapters in version 10.1.

PowerExchange for Amazon Redshift


Effective in version 10.1, you can use PowerExchange for Amazon Redshift to read data from and write data
to Amazon Redshift. You can import Amazon Redshift business entities as read and write data objects to
create and run mappings to extract data from or load data to an Amazon Redshift entity.

For more information, see the Informatica PowerExchange for Amazon Redshift 10.1 User Guide.

PowerExchange for Microsoft Azure Blob Storage


Effective in version 10.1, you can use PowerExchange for Microsoft Azure Blob Storage to read data from
and write data to Microsoft Azure Blob Storage. You can create a Microsoft Azure Blob Storage connection to
read or write Microsoft Azure Blob Storage data into a Microsoft Azure Blob Storage data object. You can
validate and run mappings in native and Hadoop environments.

For more information, see the Informatica PowerExchange for Microsoft Azure Blob Storage 10.1 User Guide.

PowerExchange for Microsoft Azure SQL Data Warehouse


Effective in version 10.1, you can use PowerExchange for Microsoft Azure SQL Data Warehouse to read data
from and write data to Microsoft Azure SQL Data Warehouse. You can validate and run mappings in native
and Hadoop environments.

For more information, see the Informatica PowerExchange for Microsoft Azure SQL Data Warehouse 10.1 User
Guide.

Chapter 13

New Features (10.1)


This chapter includes the following topics:

• Application Services
• Big Data
• Business Glossary
• Connectivity
• Command Line Programs
• Documentation
• Exception Management
• Informatica Administrator
• Informatica Analyst
• Informatica Developer
• Informatica Development Platform
• Live Data Map
• Mappings
• Metadata Manager
• PowerCenter
• PowerExchange Adapters
• Security
• Transformations
• Workflows

Application Services
This section describes new application services features in version 10.1.

System Services
This section describes new system service features in version 10.1.

Scheduler Service for Profiles and Scorecards


Effective in version 10.1, you can use the Scheduler Service to schedule profile runs and scorecard runs to
run at a specific time or intervals.

For more information about schedules, see the "Schedules" chapter in the Informatica 10.1 Administrator
Guide.

Set the Time Zone for a Schedule


Effective in version 10.1, when you choose a date and time to run a schedule, you also choose the time zone.
When you set the time zone, you ensure that the job runs at the time you expect it to run, no matter where the
Data Integration Service is running.

For more information about schedules, see the "Schedules" chapter in the Informatica 10.1 Administrator
Guide.

Big Data
This section describes new big data features in version 10.1.

Hadoop Ecosystem
Support in Big Data Management 10.1
Effective in version 10.1, Informatica supports the following updated versions of Hadoop distributions:

• Azure HDInsight 3.3


• Cloudera CDH 5.5
• MapR 5.1

For the full list of Hadoop distributions that Big Data Management 10.1 supports, see the Informatica Big
Data Management 10.1 Installation and Configuration Guide.

Hadoop Security Systems


Effective in version 10.1, Informatica supports the following security systems on the Hadoop ecosystem:

• Apache Knox
• Apache Ranger
• Apache Sentry
• HDFS Transparent Encryption
Limitations apply to some combinations of security system and Hadoop distribution platform. For more
information on Informatica support for these technologies, see the Informatica Big Data Management 10.1
Security Guide.

Spark Runtime Engine
Effective in version 10.1, you can push mappings to the Apache Spark engine in the Hadoop environment.

Spark is an Apache project with a run-time engine that can run mappings on the Hadoop cluster. Configure
the Hadoop connection properties specific to the Spark engine. After you create the mapping, you can
validate it and view the execution plan in the same way as the Blaze and Hive engines.

When you push mapping logic to the Spark engine, the Data Integration Service generates a Scala program
and packages it into an application. It sends the application to the Spark executor that submits it to the
Resource Manager on the Hadoop cluster. The Resource Manager identifies resources to run the application.
You can monitor the job in the Administrator tool.

For more information about using Spark to run mappings, see the Informatica Big Data Management 10.1 User
Guide.

Sqoop Connectivity for Relational Sources and Targets


Effective in version 10.1, you can use Sqoop to process data between relational databases and HDFS through
MapReduce programs. You can use Sqoop to import and export data. When you use Sqoop, you do not need
to install the relational database client and software on any node in the Hadoop cluster.

To use Sqoop, you must configure Sqoop properties in a JDBC connection and run the mapping in the
Hadoop environment. You can configure Sqoop connectivity for relational data objects, customized data
objects, and logical data objects that are based on a JDBC-compliant database. For example, you can
configure Sqoop connectivity for the following databases:

• Aurora
• IBM DB2
• IBM DB2 for z/OS
• Greenplum
• Microsoft SQL Server
• Netezza
• Oracle
• Teradata
You can also run a profile on data objects that use Sqoop in the Hive run-time environment.
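For example, a minimal illustration of Sqoop arguments that you might enter in the JDBC connection to tune the transfer. The mapper count and split column here are hypothetical values, not required settings:

-m 4 --split-by order_id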

For more information, see the Informatica 10.1 Big Data Management User Guide.

Transformation Support on the Blaze Engine


Effective in version 10.1, the following transformations are supported on the Blaze engine:

• Address Validator
• Case Converter
• Comparison
• Consolidation
• Data Processor
• Decision
• Key Generator
• Labeler

• Match
• Merge
• Normalizer
• Parser
• Sequence Generator
• Standardizer
• Weighted Average
The Address Validator, Consolidation, Data Processor, Match, and Sequence Generator transformations are
supported with restrictions.

Effective in version 10.1, the following transformations have additional support on the Blaze engine:

• Aggregator. Supports pass-through ports.
• Lookup. Supports the unconnected Lookup transformation.
For more information, see the "Mapping Objects in a Hadoop Environment" chapter in the Informatica Big
Data Management 10.1 User Guide.

Business Glossary
This section describes new Business Glossary features in version 10.1.

Inherit Glossary Content Managers to All Assets


Effective in version 10.1, the Analyst tool applies the data steward and owner that you assign to a glossary to all the assets in the glossary.

For more information, see the "Glossary Content Management" chapter in the Informatica 10.1 Business
Glossary Guide.

Bi-directional Custom Relationships


Effective in version 10.1, you can create bi-directional custom relationships. You can view the direction of related assets in the relationship view diagram. In a bi-directional custom relationship, you provide a name for the relationship in each direction.

For more information, see the "Finding Glossary Content" chapter in the Informatica 10.1 Business Glossary
Guide.

Custom Colors in the Relationship View Diagram


Effective in version 10.1, you can define the color of the line that connects related assets in the relationship
view diagram.

For more information, see the "Glossary Administration" chapter in the Informatica 10.1 Business Glossary
Guide.

Connectivity
This section describes new connectivity features in version 10.1.

Schema Names in IBM DB2 Connections


Effective in version 10.1, when you use an IBM DB2 connection to import a table in the Developer tool or the
Analyst tool, you can specify one or more schema names from which you want to import the table. Use the
ischemaname attribute in the metadata connection string URL to specify the schema names. Use the pipe (|)
character to separate multiple schema names.

For example, enter the following syntax in the metadata connection string URL:

jdbc:informatica:db2://<host name>:<port>;DatabaseName=<database name>;ischemaname=<schema_name1>|<schema_name2>|<schema_name3>
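For instance, a hypothetical URL that imports tables from the DWH and STG schemas of a database named SALESDB; the host, port, and names are placeholders:

jdbc:informatica:db2://dbhost:50000;DatabaseName=SALESDB;ischemaname=DWH|STG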

This feature is also available in 9.6.1 HotFix 4. It is not available in 10.0.

For more information, see the Informatica 10.1 Developer Tool Guide and Informatica 10.1 Analyst Tool Guide.

Command Line Programs


This section describes new commands in version 10.1.

infacmd bg Commands
The following table describes new infacmd bg commands:

Command Description

listGlossary Lists the business glossaries in the Analyst tool.

exportGlossary Exports the business glossaries available in the Analyst tool.

importGlossary Imports business glossaries from .xlsx or .zip files that were exported from the Analyst tool.
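These commands follow the standard infacmd invocation pattern. As a hedged sketch, a listGlossary call might look like the following, where -dn, -un, and -pd are the usual infacmd domain and credential options and any command-specific options are omitted:

infacmd bg listGlossary -dn <domain name> -un <user name> -pd <password> ...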

infacmd dis Commands


The following table describes the new infacmd dis commands:

Command Description

ListApplicationPermissions Lists the permissions that a user or group has for an application.

ListApplicationObjectPermissions Lists the permissions that a user or group has for an application object such as a mapping or workflow.

SetApplicationPermissions Assigns permissions on an application to a user or a group.

SetApplicationObjectPermissions Assigns permissions on an application object such as a mapping or workflow to a user or a group.


For more information, see the "infacmd dis Command Reference" chapter in the Informatica 10.1 Command
Reference.

infacmd ihs Commands


The following table describes new infacmd ihs commands:

Command Description

BackupData Backs up HDFS data in the internal Hadoop cluster to a .zip file.

UpgradeClusterService Upgrades the Informatica Cluster Service configuration.

removeSnapshot Removes existing HDFS snapshots so that you can run the infacmd ihs BackupData
command successfully to back up HDFS data.

For more information, see the "infacmd ihs Command Reference" chapter in the Informatica 10.1 Command
Reference.

infacmd isp Commands


The following table describes the new infacmd isp commands:

Command Description

AssignDefaultOSProfile Assigns a default operating system profile to a user or group.

ListDefaultOSProfiles Lists the default operating system profiles for a user or group.

ListDomainCiphers Displays one or more of the following cipher suite lists used by the Informatica domain or
a gateway node:
Black list

User-specified list of cipher suites that the Informatica domain blocks.

Default list

List of cipher suites that Informatica supports by default.

Effective list
The list of cipher suites that the Informatica domain uses after you configure it with
the infasetup updateDomainCiphers command. The effective list supports cipher
suites in the default list and white list but blocks cipher suites in the black list.

White list

User-specified list of cipher suites that the Informatica domain can use in addition to
the default list.
You can specify which lists you want to display.

UnassignDefaultOSProfile Removes the default operating system profile that is assigned to a user or group.

The following table describes updated options for infacmd isp commands:

Command Description

CreateOSProfile The following options are added:


- -DISProcessVariables
- -DISEnvironmentVariables
- -HadoopImpersonationUser
- -HadoopImpersonationProperties
- -UseLoggedInUserAsProxy
- -ProductExtensionName
- -ProductOptions
Use these options to configure the operating system profile properties for the Data Integration
Service.

UpdateOSProfile The following options are added:


- -DISProcessVariables
- -DISEnvironmentVariables
- -HadoopImpersonationUser
- -HadoopImpersonationProperties
- -UseLoggedInUserAsProxy
- -ProductExtensionName
- -ProductOptions
Use these options to configure the operating system profile properties for the Data Integration
Service.

For more information, see the "infacmd isp Command Reference" chapter in the Informatica 10.1 Command
Reference.

infacmd ldm Commands


The following table describes new infacmd ldm commands:

Command Description

backupData Takes a snapshot of the HDFS directory and creates a .zip file of the snapshot on the local machine.

restoreData Retrieves the HDFS data backup .zip file from the local system and restores data in the HDFS
directory.

removeSnapshot Removes the snapshot from the HDFS directory.

upgrade Upgrades the Catalog Service.

For more information, see the "infacmd ldm Command Reference" chapter in the Informatica 10.1 Command
Reference.

infacmd ms Commands
The following table describes new options for infacmd ms commands:

Command Description

RunMapping The command contains the following new option:
- -osp. The operating system profile name if the Data Integration Service is enabled to use operating system profiles.
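As a hedged sketch of running a deployed mapping with an operating system profile; the -a and -m option letters for the application and mapping names are assumptions based on typical RunMapping usage, and all names are placeholders:

infacmd ms RunMapping -dn <domain name> -sn <service name> -un <user name> -pd <password> -a <application name> -m <mapping name> -osp <OS profile name>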

For more information, see the "infacmd ms Command Reference" chapter in the Informatica 10.1 Command
Reference.

infacmd ps Commands
The following table describes new options for infacmd ps commands:

Command Description

Execute, executeProfile The commands contain the following new option:
- -ospn. The operating system profile name if the Data Integration Service is enabled to use operating system profiles.
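The option parallels the infacmd ms sketch above. As a hedged sketch, with command-specific options elided:

infacmd ps executeProfile -dn <domain name> -un <user name> -pd <password> ... -ospn <OS profile name>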

For more information, see the "infacmd ps Command Reference" chapter in the Informatica 10.1 Command
Reference.

infacmd sch Commands


The following table describes updated options for infacmd sch commands:

Command Description

CreateSchedule The following argument is added to the -RunnableObjects option:


- -osProfileName. The operating system profile name if the Data Integration Service is enabled to
use operating system profiles.

UpdateSchedule The following argument is added to the -AddRunnableObjects option:


- -osProfileName. The operating system profile name if the Data Integration Service is enabled to
use operating system profiles.

For more information, see the "infacmd sch Command Reference" chapter in the Informatica 10.1 Command
Reference.

infasetup Commands
The following table describes new infasetup commands:

Command Description

ListDomainCiphers Displays one or more of the following cipher suite lists that the Informatica domain or a gateway node uses:
Black list

User-specified list of cipher suites that the Informatica domain blocks.

Default list

List of cipher suites that Informatica supports by default.

Effective list

The list of cipher suites that the Informatica domain uses after you configure it with the
infasetup updateDomainCiphers command. The effective list supports cipher suites in the
default list and white list but blocks cipher suites in the black list.

White list

User-specified list of cipher suites that the Informatica domain can use.
You can specify which lists you want to display.

updateDomainCiphers Updates the cipher suites that the Informatica domain can use with a new effective list.

The following table describes updated options for infasetup commands:

Command Description

DefineDomain, DefineGatewayNode, DefineWorkerNode, UpdateGatewayNode, UpdateWorkerNode
The commands contain the following new options:
- -cipherWhiteList | -cwl
- -cipherWhiteListFile | -cwlf
- -cipherBlackList | -cbl
- -cipherBlackListFile | -cblf
Use these options to configure cipher suites for an Informatica domain that uses secure communication within the domain or secure connections to web application services.
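For example, a hedged sketch that whitelists one cipher suite and blacklists another while updating a gateway node. The suite names are standard JSSE cipher suite names chosen for illustration, and the node's other required options are elided:

infasetup UpdateGatewayNode ... -cwl TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256 -cbl TLS_RSA_WITH_3DES_EDE_CBC_SHA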

For more information, see the "infasetup Command Reference" chapter in the Informatica 10.1 Command
Reference.

pmrep Commands
The following table describes a new pmrep command:

Command Description

AssignIntegrationService Assigns the PowerCenter Integration Service to the specified workflow.

The following table describes the updated option for a pmrep command:

Command Description

CreateConnection The command contains the following updated option:


- -s. The connection type list includes FTP.

For more information, see the "pmrep Command Reference" chapter in the Informatica 10.1 Command
Reference.

Documentation
This section describes new or updated guides included with the Informatica documentation in version 10.1.

The Informatica documentation contains the following new guides:

Metadata Manager Command Reference

Effective in version 10.1, the Metadata Manager Command Reference contains information about all of
the Metadata Manager command line programs. The Metadata Manager Command Reference is included
in the online help for Metadata Manager. Previously, information about the Metadata Manager command
line programs was included in the Metadata Manager Administrator Guide.

For more information, see the Informatica 10.1 Metadata Manager Command Reference.

Informatica Administrator Reference for Live Data Map®

Effective in Live Data Map version 2.0, the Informatica Administrator Reference for Live Data Map
contains basic reference information on Informatica Administrator tasks that you need to perform in Live
Data Map. The Informatica Administrator Reference for Live Data Map is included in the online help for
Informatica Administrator.

For more information, see the Informatica 2.0 Administrator Reference for Live Data Map.

Exception Management
This section describes new exception management features in version 10.1.

Search and replace data values by data type

Effective in version 10.1, you can configure the options in an exception task to search and replace data
values based on the data type. You can configure the options to search and replace data in any column
that contains date, string, or numeric data.

When you specify a data type, the Analyst tool searches for the value that you enter in any column that
uses the data type. You can find and replace any value that a string data column contains. You can
perform case-sensitive searches on string data. You can search for a partial match or a complete match
between the search value and the contents of a field in a string data column.

This feature is also available in 9.6.1 HotFix 4. It is not available in 10.0.

For more information, see the Exception Records chapter in the Informatica 10.1 Exception Management
Guide.

Informatica Administrator
This section describes new Administrator tool features in version 10.1.

Domain View
Effective in version 10.1, you can view historical statistics for CPU usage and memory usage in the domain.

You can view the CPU and memory usage statistics for the last 60 minutes. You can toggle between the current statistics and the last hour trend. In the Domain view, choose Actions > Current or Actions > Last Hour Trend in the CPU Usage panel or the Memory Usage panel.

Monitoring
Effective in version 10.1, the Monitor tab in the Administrator tool has the following features:

Details view on the Summary Statistics view

The Summary Statistics view has a Details view. You can view information about jobs, export the list to
a .csv file, and link to a job in the Execution Statistics view. To access the Details view, click View
Details.

Historical Statistics view.

When you select an Ad Hoc or a deployed mapping job in the Contents panel of the Monitor tab, the
Details panel contains the Historical Statistics view. The Historical Statistics view shows averaged data
from multiple runs for a specific job. For example, you can view the minimum, maximum, and average
duration of the mapping job. You can view the average amount of CPU that the job consumes when it
runs.

Informatica Analyst
This section describes new Analyst tool features in version 10.1.

Profiles
This section describes new Analyst tool features for profiles and scorecards.

Conformance Criteria
Effective in version 10.1, you can select a minimum number of conforming rows as conformance criteria for
data domain discovery.

For more information about conformance criteria, see the "Data Domain Discovery in Informatica Analyst"
chapter in the Informatica 10.1 Data Discovery Guide.

Exclude Nulls for Data Domain Discovery


Effective in version 10.1, you can exclude null values from the data set when you perform data domain discovery on a data source. When you select the minimum percentage of rows with the exclude null values option, the conformance percentage is the ratio of the number of matching rows divided by the total number of rows, excluding the null values in the column.

For more information about the option to exclude null values from data domain discovery, see the "Data Domain Discovery in Informatica Analyst" chapter in the Informatica 10.1 Data Discovery Guide.

Run-time Environment
Effective in version 10.1, you can choose the Hadoop option as the run-time environment when you create or
edit a column profile, data domain discovery profile, enterprise discovery profile, or scorecard. When you
choose the Hadoop option, the Data Integration Service pushes the profile logic to the Blaze engine on the
Hadoop cluster to run profiles.

For more information about the run-time environment, see the "Data Object Profiles" chapter in the Informatica
10.1 Data Discovery Guide.

Scorecard Dashboard
Effective in version 10.1, you can view the following scorecard details in the scorecard dashboard:

• Total number of scorecards in the projects
• Scorecard run trend for the past six months
• Total number of data objects and the number of data objects that have scorecards
• Cumulative metrics trend for the past six months
For more information about the scorecard dashboard, see the "Scorecards in Informatica Analyst" chapter in the
Informatica 10.1 Data Discovery Guide.

Informatica Developer
This section describes new Informatica Developer features in version 10.1.

Generate Source File Name


Effective in version 10.1, you can use the file name column option to return the source file name. You can configure
the mapping to write the source file name to each source row.

For more information, see the Informatica 10.1 Developer Tool Guide.

Import from PowerCenter


Effective in version 10.1, you can import mappings that contain Netezza and Teradata objects from
PowerCenter into the Developer tool and run the mappings in a native or Hadoop run-time environment.

For more information, see the Informatica 10.1 Developer Mapping Guide.

Copy Text Between Excel and the Developer Tool


Effective in version 10.1, you can copy text from Excel to the Developer tool or from the Developer tool to
Excel. Copy text from Excel to the Developer tool to provide metadata for transformations. For example, you
have designed a mapping in Excel that includes all transformations, their port names, data types, and
transformation logic. In the Developer tool, you can copy the fields from Excel into the ports of empty
transformations. Similarly, you can copy transformation ports from the Developer tool into Excel.

Logical Data Object Read and Write Mapping Editing
Effective in Informatica 10.1, you can use the logical data object editor to edit and change metadata in logical
data object Read and Write mappings. For more information, see the "Logical View of Data" chapter in the
Informatica 10.1 Developer Tool Guide.

DDL Query
Effective in version 10.1, when you choose to create or replace the target at run time, you can define a DDL query that the Data Integration Service uses to create or replace the target table at run time. You can define a DDL query for relational and Hive targets.

You can enter placeholders in the DDL query. The Data Integration Service substitutes the placeholders with
the actual values at run time. For example, if a table contains 50 columns, instead of entering all the column
names in the DDL query, you can enter a placeholder.

You can enter the following placeholders in the DDL query:

• INFA_TABLE_NAME
• INFA_COLUMN_LIST
• INFA_PORT_SELECTOR
You can also enter parameters in the DDL query.
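For example, a minimal sketch of a DDL query that uses the placeholders. At run time, the Data Integration Service would substitute the target table name and the column list:

CREATE TABLE INFA_TABLE_NAME (INFA_COLUMN_LIST)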

For more information, see the Informatica 10.1 Developer Mapping Guide.

Profiles
This section describes new Developer tool features for profiles and scorecards.

Column Profiles with Avro and Parquet Data Sources


Effective in version 10.1, you can create a column profile on an Avro or Parquet data source in HDFS.

For more information about column profiles on Avro and Parquet data sources, see the "Column Profiles on
Semi-structured Data Sources" chapter in the Informatica 10.1 Data Discovery Guide.

Conformance Criteria
Effective in version 10.1, you can select a minimum number of conforming rows as conformance criteria for
data domain discovery.

For more information about conformance criteria, see the "Data Domain Discovery in Informatica Developer"
chapter in the Informatica 10.1 Data Discovery Guide.

Exclude Nulls for Data Domain Discovery


Effective in version 10.1, you can exclude null values from the data set when you perform data domain discovery on a data source. When you select the minimum percentage of rows with the exclude null values option, the conformance percentage is the ratio of the number of matching rows divided by the total number of rows, excluding the null values in the column.

For more information about the option to exclude null values from data domain discovery, see the "Data Domain Discovery in Informatica Developer" chapter in the Informatica 10.1 Data Discovery Guide.

Run-time Environment
Effective in version 10.1, you can choose the Hadoop option as the run-time environment when you create or edit a column profile, data domain discovery profile, enterprise discovery profile, or scorecard. When you choose the Hadoop option, the Data Integration Service pushes the profile logic to the Blaze engine on the Hadoop cluster to run profiles.

For more information about the run-time environment, see the "Data Object Profiles" chapter in the Informatica
10.1 Data Discovery Guide.

Informatica Development Platform


This section describes new features and enhancements to the Informatica Development Platform.

Informatica Connector Toolkit


Effective in version 10.1, you can use the following features in the Informatica Connector Toolkit:

Pre-defined type system

When you create a connector that uses REST APIs to connect to the data source, you can use pre-defined data types. You can use the following Informatica Platform data types:

• string
• integer
• bigInteger
• decimal
• double
• binary
• date

Procedure pattern

When you create a connector for Informatica Cloud, you can define native metadata objects for
procedures in data sources. You can use the following options to define the native metadata object for a
procedure:
Manually create the native metadata object

When you define the native metadata objects manually, you can specify the following details:

Metadata Component Description

Procedure extension Additional metadata information that you can specify for a procedure.

Parameter extension Additional metadata information that you can specify for parameters.

Call capability attributes Additional metadata information that you can specify to create a read or write call to a procedure.

Use swagger specifications

When you use swagger specifications to define the native metadata object, you can either use an existing swagger specification or you can generate a swagger specification by sampling the REST endpoint.

Edit common metadata

You can specify common metadata information for Informatica Cloud connectors, such as schema
name and foreign key name.

Export the connector files for Informatica Cloud

After you design and implement the connector components, you can export the connector files for
Informatica Cloud by specifying the plug-in ID and plug-in version.

Export the connector files for PowerCenter

After you design and implement the connector components, you can export the connector files for
PowerCenter by specifying the PowerCenter version.

Live Data Map


This section describes new Live Data Map features in version 10.1.

Email Notifications
Effective in version 10.1, you can configure and receive email notifications on the Catalog Service status so that you can closely monitor and troubleshoot application service issues. You use the Email Service and the associated Model Repository Service to send email notifications.

For more information, see the Informatica 10.1 Administrator Reference for Live Data Map.

Keyword Search
Effective in version 10.1, you can use the following keywords to restrict the search results to specific types of
assets:

• Table
• Column
• File
• Report

For example, if you want to search for all the tables with the term "customer" in them, type in "tables with
customer" in the Search box. Enterprise Information Catalog lists all the tables that include the search term
"customer" in the table name.

For more information, see the Informatica 10.1 Enterprise Information Catalog User Guide.

Profiling
Effective in version 10.1, Live Data Map can run profiles in the Hadoop environment. When you choose the
Hadoop connection, the Data Integration Service pushes the profile logic to the Blaze engine on the Hadoop
cluster to run profiles.

For more information, see the Informatica 10.1 Live Data Map Administrator Guide.

Scanners
Effective in version 10.1, you can extract metadata from the following sources:

• Amazon Redshift
• Amazon S3
• Custom Lineage
• HDFS
• Hive
• Informatica Cloud
• MicroStrategy
For more information, see the Informatica 10.1 Live Data Map Administrator Guide.

Mappings
This section describes new mapping features in version 10.1.

Informatica Mappings
This section describes new features for Informatica mappings in version 10.1.

Generate a Mapplet from Connected Transformations


Effective in version 10.1, you can generate a mapplet from a group of connected transformations in a
mapping. Use the mapplet as a template to add to multiple mappings that connect to different sources and
targets.

Generate a Mapping or Logical Data Object from an SQL Query


Effective in version 10.1, you can generate a mapping or a logical data object from an SQL query in the
Developer tool.

To generate a mapping or logical data object from an SQL query, click File > New > Mapping from SQL Query.
Enter a SQL query or select the location of the text file with an SQL query that you want to convert to a
mapping. You can also generate a logical data object from an SQL query that contains only SELECT
statements.
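For example, you could convert a hypothetical query such as the following into a mapping; the table and column names are illustrative:

SELECT c.customer_id, c.name, o.order_total
FROM customers c
JOIN orders o ON c.customer_id = o.customer_id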

For more information about generating a mapping or a logical data object from an SQL query, see the
Informatica 10.1 Developer Mapping Guide.

Metadata Manager
This section describes new Metadata Manager features in version 10.1.

Universal Resources
Effective in version 10.1, you can create universal resources to extract metadata from some metadata
sources for which Metadata Manager does not package a model. For example, you can create a universal
resource to extract metadata from an Apache Hadoop Hive Server, QlikView, or Talend metadata source.

To extract metadata from these sources, you first create an XConnect that represents the metadata source type. The XConnect includes the model for the metadata source. You then create one or more resources that are based on the model. The universal resources that you create behave like packaged resources in Metadata Manager.

For more information about universal resources, see the "Universal Resources" chapter in the Informatica
10.1 Metadata Manager Administrator Guide.

Incremental Loading for Oracle and Teradata Resources


Effective in version 10.1, you can enable incremental loading for Oracle resources and for Teradata
resources. An incremental load causes Metadata Manager to load recent changes to the metadata instead of
loading complete metadata. Incremental loading reduces the amount of time it takes to load the resource.

To enable incremental loading for an Oracle resource or for a Teradata resource, enable the Incremental load option in the resource configuration properties. This option is disabled by default.

For more information about incremental loading for Oracle and Teradata resources, see the "Database
Management Resources" chapter in the Informatica 10.1 Metadata Manager Administrator Guide.

Hiding Resources in the Summary View


Effective in version 10.1, you can prevent a resource and its child objects from being displayed in the
summary view of data lineage diagrams. To hide a resource, enable the Hide in Summary Lineage option on
the Properties page of the resource configuration properties. This option is available for all resource types. It
is disabled by default.

You can hide objects such as staging databases from data lineage diagrams. If you want to view the hidden
objects, you can switch from the summary view to the detail view through the task bar.

For more information about the summary view of data lineage diagrams, see the "Working with Data Lineage"
chapter in the Informatica 10.1 Metadata Manager User Guide.

Creating an SQL Server Integration Services Resource from Multiple Package Files
Effective in version 10.1, you can create a Microsoft SQL Server Integration Services resource that extracts
metadata from packages in separate package (.dtsx) files. The package files must be in the same directory.

To create a resource that extracts metadata from packages in different package files, specify the directory
that contains the package files in the Directory resource configuration property.

For more information about creating and configuring Microsoft SQL Server Integration Services resources,
see the "Database Management Resources" chapter in the Informatica 10.1.1 Metadata Manager
Administrator Guide.

Metadata Manager Command Line Programs
Effective in version 10.1, Metadata Manager has a new command line program. The mmXConPluginUtil
command line program generates the image mapping information or the plug-in for a universal XConnect.

The following table describes the mmXConPluginUtil commands:

Command Name Description

generateImageMapping Generates the image mapping information for a universal XConnect.

generatePlugin Generates the plug-in for a universal XConnect.

For more information about the mmXConPluginUtil command line program, see the "mmXConPluginUtil"
chapter in the Informatica 10.1 Metadata Manager Command Reference.

Application Properties
Effective in version 10.1, you can configure new application properties in the Metadata Manager
[Link] file. This feature is also available in 9.6.1 HotFix 4. It is not available in 10.0.

The following table describes new Metadata Manager application properties in [Link]:

Property Description

[Link] Maximum number of errors that the Metadata Manager Service can encounter before the custom resource load fails.

[Link] Number of errors that the Metadata Manager Service writes to the in-memory cache and to the [Link] file in one batch when you load a custom resource.

For more information about the [Link] file, see the "Metadata Manager Properties Files" appendix in
the Informatica 10.1 Metadata Manager Administrator Guide.

Migrate Business Glossary Audit Trail History and Links to Technical Metadata
Effective in version 10.1, you can migrate audit trail history and links to technical metadata when you export
business glossaries. You can import the audit trail history and links in the Analyst tool.

This feature is also available in 9.6.1 HotFix 4. It is not available in 10.0.

For more information, see the Informatica 10.1 Upgrading from Version 9.5.1 Guide.

PowerCenter
This section describes new PowerCenter features in version 10.1.

Create a Source Definition from a Target Definition


Effective in version 10.1, you can create a source definition from a target definition. You can drag the target
definitions into the Source Analyzer to create source definitions.

For more information, see the Informatica 10.1 PowerCenter Designer Guide.

Create an FTP Connection Type from the Command Line


Effective in version 10.1, you can create an FTP connection with the pmrep CreateConnection command.
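As a hedged sketch, the command follows the pmrep CreateConnection syntax with the new FTP connection type. The -s option value is documented in this release; the -n option for the connection name is an assumption, and the remaining options are elided:

pmrep CreateConnection -s FTP -n <connection name> ...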

For more information, see the "pmrep Command Reference" chapter in the Informatica 10.1 Command
Reference.

Pushdown Optimization for Greenplum


Effective in version 10.1, the PowerCenter Integration Service can push transformation logic to Greenplum
sources and targets when the connection type is ODBC.

For more information, see the Informatica PowerCenter 10.1 Advanced Workflow Guide.

PowerExchange Adapters
This section describes new PowerExchange adapter features in version 10.1.

PowerExchange Adapters for Informatica


This section describes new Informatica adapter features in version 10.1.

PowerExchange for HDFS


Effective in version 10.1, you can use PowerExchange for HDFS to read Avro and Parquet data files from and write them to HDFS and the local file system without using a Data Processor transformation.

For more information, see the Informatica PowerExchange for HDFS 10.1 User Guide.

PowerExchange for Hive


Effective in version 10.1, you can use char and varchar data types in mappings. You can also select different
Hive databases when you create a data object and a mapping.

For more information, see the Informatica PowerExchange for Hive 10.1 User Guide.

PowerExchange for Teradata Parallel Transporter API


Effective in version 10.1, you can enable Teradata Connector for Hadoop (TDCH) to run a Teradata mapping on the Blaze engine. When you run the mapping, the Data Integration Service pushes the mapping to a Hadoop cluster and processes the mapping on the Blaze engine, which significantly increases performance.

For more information, see the Informatica PowerExchange for Teradata Parallel Transporter API 10.1 User
Guide.

PowerExchange Adapters for PowerCenter
This section describes new PowerCenter adapter features in version 10.1.

PowerExchange for Greenplum


Effective in version 10.1, you can configure Kerberos authentication for native Greenplum connections.

This feature is also available in 9.6.1 HotFix 4. It is not available in 10.0.

For more information, see the "Greenplum Sessions and Workflows" chapter in the Informatica 10.1
PowerExchange for Greenplum User Guide for PowerCenter.

Security
This section describes new security features in version 10.1.

Custom Cipher Suites


Effective in version 10.1, you can customize the cipher suites that the Informatica domain uses for secure communication within the domain and secure connections to web application services. You can create a whitelist and a blacklist to enable or block specific cipher suites. This feature is also available in 9.6.1 HotFix 4. It is not available in 10.0.

The Informatica domain uses an effective list of cipher suites that combines the cipher suites in the default list and the whitelist but blocks the cipher suites in the blacklist.

For more information, see the "Domain Security" chapter in the Informatica 10.1 Security Guide.

Operating System Profiles


Effective in version 10.1, if the Data Integration Service runs on UNIX or Linux, you can create operating
system profiles and configure the Data Integration Service to use operating system profiles. Use operating
system profiles to increase security and to isolate the run-time user environment in Informatica products
such as Big Data Management, Data Quality, and Intelligent Data Lake.

The Data Integration Service uses operating system profiles to run mappings, profiles, scorecards, and
workflows. The operating system profile contains the operating system user name, service process variables,
Hadoop impersonation properties, the Analyst Service properties, environment variables, and permissions.
The Data Integration Service runs the mapping, profile, scorecard, or workflow with the system permissions
of the operating system user and the properties defined in the operating system profile.

For more information about operating system profiles, see the "Users and Groups" chapter in the Informatica
10.1 Security Guide.

Application and Application Object Permissions


Effective in version 10.1, you can assign permissions to control the level of access that a user or group has
on applications and application objects such as mappings and workflows.

For more information about application and application object permissions, see the "Permissions" chapter in
the Informatica 10.1 Security Guide.

Transformations
This section describes new transformation features in version 10.1.

Informatica Transformations
This section describes new features in Informatica transformation in version 10.1.

Address Validator Transformation


This section describes the new Address Validator transformation features.

The Address Validator transformation contains additional address functionality for the following countries:

Ireland

Effective in version 10.1, you can return the eircode for an address in Ireland. An eircode is a seven-
character code that uniquely identifies an Ireland address. The eircode system covers all residences,
public buildings, and business premises and includes apartment addresses and addresses in rural
townlands.

To return the eircode for an address, select a Postcode port or a Postcode Complete port.

France

Effective in version 10.1, address validation uses the Hexaligne 3 repository of the National Address
Management Service to certify a France address to the SNA standard.

The Hexaligne 3 data set contains additional information on delivery point addresses, including sub-building details such as building names and residence names.

Germany

Effective in version 10.1, you can retrieve the three-digit street code part of the Frachtleitcode or Freight Code as an enrichment to a valid Germany address. The street code identifies the street within the address.

To retrieve the street code as an enrichment to verified Germany addresses, select the Street Code DE
port. Find the port in the DE Supplementary port group.

Informatica adds the Street Code DE port in version 10.1.

South Korea

Effective in version 10.1, you can verify older, lot-based addresses and addresses with older, six-digit
post codes in South Korea. You can verify and update addresses that use the current format, the older
format, and a combination of the current and older formats. A current South Korea address has a street-based format and includes a five-digit post code. A non-current address has a lot-based format and
includes a six-digit post code.

To verify a South Korea address in an older format and to change the information to another format, use
the Address Identifier KR ports. You update the address information in two stages. First, run the address
validation mapping in batch or interactive mode and select the Address Identifier KR output port. Then,
run the address validation mapping in address code lookup mode and select the Address Identifier KR
input port. Find the Address Identifier KR input port in the Discrete port group. Find the Address Identifier
KR output port in the KR Supplementary port group.

To verify that the Address Validator transformation can read and write the address data, add the
Supplementary KR Status port to the transformation.

Informatica adds the Address Identifier KR ports, the Supplementary KR Status port, and the KR
Supplementary port group in version 10.1.

Effective in version 10.1, you can retrieve South Korea address data in the Hangul script and in a Latin
script.

United Kingdom

Effective in version 10.1, you can retrieve delivery point type data and organization key data for a United
Kingdom address. The delivery point type is a single-character code that indicates whether the address
points to a residence, a small organization, or a large organization. The organization key is an eight-digit
code that the Royal Mail assigns to small organizations.

To add the delivery point type to a United Kingdom address, use the Delivery Point Type GB port. To add
the organization key to a United Kingdom address, use the Organization Key GB port. Find the ports in
the UK Supplementary port group. To verify that the Address Validator transformation can read and write
the data, add the Supplementary UK Status port to the transformation.

Informatica adds the Delivery Point Type GB port and the Organization Key GB port in version 10.1.

These features are also available in 9.6.1 HotFix 4. They are not available in 10.0.

For more information, see the Informatica 10.1 Address Validator Port Reference.

Data Processor Transformation


This section describes new Data Processor transformation features.

REST API
An application can call the Data Transformation REST API to run a Data Transformation service.

For more information, see the Informatica 10.1 Data Transformation REST API User Guide.

XmlToDocument_45 Document Processor


The XmlToDocument_45 document processor converts XML data to document formats, such as PDF or
Excel. This component uses the Business Intelligence and Reporting Tool (BIRT) version 4.5 Eclipse add-on.
Document processors for older versions of BIRT are also available.

For more information, see the Informatica 10.1 Data Transformation User Guide.

Relational to Hierarchical Transformation


This section describes the Relational to Hierarchical transformation that you create in the Developer tool.

The Relational to Hierarchical transformation is an optimized transformation introduced in version 10.1 that
converts relational input to hierarchical output.

For more information, see the Informatica 10.1 Developer Transformation Guide.

Workflows
This section describes new workflow features in version 10.1.

PowerCenter Workflows
This section describes new features in PowerCenter workflows in version 10.1.

Assign Workflows to the PowerCenter Integration Service
Effective in version 10.1, you can assign a workflow to the PowerCenter Integration Service with the pmrep
AssignIntegrationService command.
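As a hedged sketch; the option letters are assumptions based on typical pmrep usage, and the folder, workflow, and service names are placeholders:

pmrep AssignIntegrationService -f <folder name> -n <workflow name> -i <integration service name>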

For more information, see the "pmrep Command Reference" chapter in the Informatica 10.1 Command
Reference.

Chapter 14

Changes (10.1)
This chapter includes the following topics:

• Support Changes
• Application Services
• Big Data
• Business Glossary
• Command Line Programs
• Exception Management
• Informatica Developer
• Live Data Map
• Metadata Manager
• PowerCenter
• Security
• Transformations
• Workflows

Support Changes
Effective in version 10.1, Informatica announces the following support changes:

Informatica Installation
Effective in version 10.1, Informatica implemented the following change in operating system support:

Support Change Level of Support Comments

SUSE 11 Added support Effective in version 10.1, Informatica added support for SUSE Linux Enterprise Server 11.

Reporting Service (Deprecated)


Effective in version 10.1, Informatica deprecated the Reporting Service. Informatica will drop support for the
Reporting Service in a future release. The Reporting Service custom roles are deprecated.

If you upgrade to version 10.1, you can continue to use the Reporting Service. You can continue to use Data
Analyzer. Informatica recommends that you begin using a third-party reporting tool before Informatica drops support. You can use the recommended SQL queries for building all the reports shipped with earlier versions
of PowerCenter.

If you install version 10.1, you cannot create a Reporting Service. You cannot use Data Analyzer. You must
use a third-party reporting tool to run PowerCenter and Metadata Manager reports.

For information about the PowerCenter Reports, see the Informatica PowerCenter Using PowerCenter Reports
Guide. For information about the PowerCenter repository views, see the Informatica PowerCenter Repository
Guide. For information about the Metadata Manager repository views, see the Informatica Metadata Manager
View Reference.

Reporting and Dashboards Service (Deprecated)


Effective in version 10.1, Informatica deprecated the Reporting and Dashboards Service. Informatica will drop
support for the Reporting and Dashboards Service in a future release.

If you upgrade to version 10.1, you can continue to use the Reporting and Dashboards Service. Informatica
recommends that you begin using a third-party reporting tool before Informatica drops support. You can use
the recommended SQL queries for building all the reports shipped with earlier versions of PowerCenter.

If you install version 10.1, you cannot create a Reporting and Dashboards Service. You must use a third-party
reporting tool to run PowerCenter and Metadata Manager reports.

For information about the PowerCenter Reports, see the Informatica PowerCenter Using PowerCenter Reports
Guide. For information about the PowerCenter repository views, see the Informatica PowerCenter Repository
Guide. For information about the Metadata Manager repository views, see the Informatica Metadata Manager
View Reference.

Application Services
This section describes changes to application services in version 10.1.

System Services
This section describes changes to system services in version 10.1.

Email Service for Scorecard Notifications


Effective in version 10.1, scorecard notifications use the email server that you configure on the Email Service.

Previously, scorecard notifications used the email server that you configured on the domain.

For more information about the Email Service, see the "System Services" chapter in the Informatica 10.1
Application Service Guide.

Big Data
This section describes changes to big data features.

JCE Policy File Installation


Effective in version 10.1, Informatica Big Data Management ships the JCE policy file and installs it when you
run the installer.

Previously, you had to download and manually install the JCE policy file for AES encryption.

Business Glossary
This section describes changes to Business Glossary in version 10.1.

Custom Relationships
Effective in version 10.1, you can create custom relationships in the Manage Glossary Relationships workspace. Under Manage, click Glossary Relationships to open the Manage Glossary Relationships workspace.

Previously, you had to edit the glossary template to create custom relationships.

For more information, see the "Glossary Administration" chapter in the Informatica 10.1 Business Glossary
Guide.

Bi-Directional Default Relationships


Effective in version 10.1, the default business term relationships are bi-directional.

Previously, the default relationships were uni-directional.

For more information, see the "Finding Glossary Content" chapter in the Informatica 10.1 Business Glossary
Guide.

Governed By Relationship
Effective in version 10.1, you can no longer create a "governed by" relationship between terns. The "governed
by" relationship can only be used between a policy and a term.

Previously, you could create a "governed by" relationship between terms.

For more information, see the Informatica 10.1 Business Glossary Guide.

Glossary Workspace
Effective in version 10.1, in the Glossary workspace, the Analyst tool displays multiple Glossary assets in
separate tabs.

Previously, the Analyst tool displayed only one Glossary asset in the Glossary workspace.

For more information, see the "Finding Glossary Content" chapter in the Informatica 10.1 Business Glossary
Guide.

Business Glossary Desktop
Effective in version 10.1, you can install Business Glossary Desktop on the OS X operating system.

Previously, Business Glossary Desktop was available only for Windows.

For more information, see the Informatica 10.1 Business Glossary Desktop Installation and Configuration
Guide.

Kerberos Authentication for Business Glossary Command Program


Effective in version 10.1, the Business Glossary command program is supported in a domain that uses Kerberos authentication.

Previously, the Business Glossary command program was not supported in a domain that uses Kerberos authentication.

For more information, see the "infacmd bg Command Reference" chapter in the Informatica 10.1 Command
Reference.

Command Line Programs


This section describes changes to commands in version 10.1.

infacmd isp Commands


The following table describes the deprecated infacmd isp commands:

Command Description

BackupDARepositoryContents Backs up content for a Data Analyzer repository to a binary file. When you back up the content, the Reporting Service saves the Data Analyzer repository, including the repository objects, connection information, and code page information.

CreateDARepositoryContents Creates content for a Data Analyzer repository. You add repository content when you create the Reporting Service or delete the repository content. You cannot create content for a repository that already includes content.

CreateReportingService Creates a Reporting Service in the domain.

DeleteDARepositoryContents Deletes repository content from a Data Analyzer repository. When you delete repository content, you also delete all privileges and roles assigned to users for the Reporting Service.

RestoreDARepositoryContents Restores content for a Data Analyzer repository from a binary file. You can restore metadata from a repository backup file to a database. If you restore the backup file on an existing database, you overwrite the existing content.

UpdateReportingService Updates or creates the service and lineage options for the Reporting Service.

UpgradeDARepositoryContents Upgrades content for a Data Analyzer repository.

UpgradeDARepositoryUsers Upgrades users and groups in a Data Analyzer repository. When you upgrade the users and groups in the Data Analyzer repository, the Service Manager moves them to the Informatica domain.

For more information, see the "infacmd isp Command Reference" chapter in the Informatica 10.1 Command
Reference.

Exception Management
This section describes the changes to exception management in version 10.1.

Default search and replace operations in an exception task

Effective in version 10.1, you can configure the options in an exception task to find and replace data
values in one or more columns. You can specify a single column, or you can specify any column that
uses a string, date, or numeric data type. By default, a find and replace operation applies to all columns
that contain string data.

Previously, a find and replace operation ran by default on all of the data in the task. In version 10.1, you
cannot configure a find and replace operation to run on all of the data in the task.

For more information, see the Exception Records chapter in the Informatica 10.1 Exception Management
Guide.

Informatica Developer
This section describes the changes to the Developer tool in version 10.1.

Keyboard Shortcuts
Effective in version 10.1, the shortcut key to select the next area is Ctrl+Tab followed by pressing the Tab button three times.

Previously, the shortcut key was Ctrl+Tab followed by Ctrl+Tab.

For more information, see the "Keyboard Shortcuts" appendix in the Informatica 10.1.1 Developer Tool Guide.

Live Data Map


This section describes changes to Live Data Map in version 10.1.

Enterprise Information Catalog
This section describes the changes to Enterprise Information Catalog.

Home Page
Effective in version 10.1, the home page displays the trending search, the top 50 assets, and recently viewed assets. Trending search refers to the terms that were searched the most in the catalog in the last week. The top 50 assets are the assets with the greatest number of relationships with other assets in the catalog.

Previously, the Enterprise Information Catalog home page displayed the search field, the number of resources
that Live Data Map scanned metadata from, and the total number of assets in the catalog.

For more information about the Enterprise Information Catalog home page, see the "Getting Started with
Informatica Enterprise Information Catalog" chapter in the Informatica 10.1 Enterprise Information Catalog
User Guide.

Asset Overview
Effective in version 10.1, you can view the schema name associated with an asset in the Overview tab.

Previously, the Overview tab for an asset did not display the associated schema name.

For more information about assets in Enterprise Information Catalog, see the Informatica 10.1 Enterprise
Information Catalog User Guide.

Live Data Map Administrator Home Page


Effective in version 10.1, the Start workspace displays the total number of assets in the catalog, unused
resources, and unassigned connections in addition to many other monitoring statistics.

Previously, the Live Data Map Administrator home page displayed several monitoring statistics, such as
number of resources for each resource type, task distribution, and predictive job load.

For more information about Live Data Map Administrator home page, see the "Using Live Data Map
Administrator" chapter in the Informatica 10.1 Live Data Map Administrator Guide.

Metadata Manager
This section describes changes to Metadata Manager in version 10.1.

Microsoft SQL Server Integration Services Resources


Effective in version 10.1, Metadata Manager organizes SQL Server Integration Services objects in the
metadata catalog according to the connections in which the objects are used. The metadata catalog does
not contain a separate folder for each package. To select an object such as a table or column in the
metadata catalog, navigate to the object through the source or target connection in which the object is used.

Previously, Metadata Manager organized SQL Server Integration Services objects by connection and by
package. The metadata catalog contained a Connections folder in addition to a folder for each package.

For more information about SQL Server Integration Services resources, see the "Data Integration Resources"
chapter in the Informatica 10.1 Metadata Manager Administrator Guide.

Certificate Validation for Command Line Programs
Effective in version 10.1, when you configure a secure connection for the Metadata Manager web application,
the Metadata Manager command line programs do not accept security certificates that have errors. The
property that controls whether a command line program can accept security certificates that have errors is
removed. This feature is also available in 9.6.1 HotFix 4. It is not available in 10.0.

Previously, the [Link] property in the [Link] file controlled certificate validation for mmcmd or mmRepoCmd. You could configure the property to either accept all certificates or accept only certificates that do not have errors.

Because the command line programs no longer accept security certificates that have errors, the
[Link] property is obsolete. The property no longer appears in the
[Link] files for mmcmd or mmRepoCmd.

For more information about certificate validation for mmcmd and mmRepoCmd, see the "Metadata Manager
Command Line Programs" chapter in the Informatica 10.1 Metadata Manager Administrator Guide.

PowerCenter
This section describes changes to PowerCenter in version 10.1.

Operating System Profiles


Effective in version 10.1, the OS Profile tab in the Security page of the Administrator tool is renamed to the
Operating System Profiles tab. To create operating system profiles, go to the Security Actions menu and
click Create Operating System Profile. You can also assign a default operating system profile to users and
groups when you create an operating system profile. Previously, the Security Actions menu had an Operating
System Profiles Configuration option.

For more information about managing operating system profiles, see the "Users and Groups" chapter in the
Informatica 10.1 Security Guide.

Security
This section describes changes to security in version 10.1.

Transport Layer Security (TLS)


Effective in version 10.1, Informatica uses TLS v1.1 and v1.2 to encrypt traffic. Additionally, Informatica
disabled support for TLS v1.0 and lower.

The changes affect secure communication within the Informatica domain, secure connections to web
application services, and connections from the Informatica domain to an external destination.

This feature is also available in 9.6.1 HotFix 4. It is not available in 10.0.

Permissions
Effective in version 10.1, the following Model repository objects have permission changes:

• Applications, mappings, and workflows. All users in the domain are granted all permissions.
• SQL data services and web services. Users with effective permissions are assigned direct permissions.

The changes affect the level of access that users and groups have to these objects.

After you upgrade, you might need to review and change the permissions to ensure that users have
appropriate permissions on objects.

For more information, see the "Permissions" chapter in the Informatica 10.1 Security Guide.

Transformations
This section describes changed transformation behavior in version 10.1.

Informatica Transformations
This section describes the changes to the Informatica transformations in version 10.1.

Address Validator Transformation


This section describes the changes to the Address Validator transformation.

The Address Validator transformation contains the following updates to address functionality:

Address validation engine upgrade

Effective in version 10.1, the Address Validator transformation uses version 5.8.1 of the Informatica
Address Verification software engine. The engine enables the features that Informatica adds to the
Address Validator transformation in version 10.1.

Previously, the transformation used version 5.7.0 of the Informatica AddressDoctor software engine.

Product name change

Informatica Address Verification is the new name of Informatica AddressDoctor. Informatica AddressDoctor
became Informatica Address Verification in version 5.8.0.

Changes to geocode options for United Kingdom addresses

Effective in version 10.1, you can select Rooftop as a geocode data property to retrieve rooftop-level
geocodes for United Kingdom addresses.

Previously, you selected the Arrival Point geocode data property to retrieve rooftop-level geocodes for
United Kingdom addresses.

If you upgrade a repository that includes an Address Validator transformation, you do not need to
reconfigure the transformation to specify the Rooftop geocode property. If you specify rooftop geocodes
and the Address Validator transformation cannot return the geocodes for an address, the transformation
does not return any geocode data.

Support for unique property reference numbers in United Kingdom input data

Effective in version 10.1, the Address Validator transformation has a UPRN GB input port and a UPRN GB
output port.

Previously, the transformation had a UPRN GB output port.

Use the input port to retrieve a United Kingdom address for a unique property reference number that you
enter. Use the UPRN GB output port to retrieve the unique property reference number for a United
Kingdom address.

These features are also available in 9.6.1 HotFix 4. They are not available in 10.0.

For more information, see the Informatica 10.1 Address Validator Port Reference.

Data Processor Transformation
This section describes the changes to the Data Processor transformation.

Excel 2013
Effective in version 10.1, the ExcelToXml_03_07_10 document processor can process Excel 2013 files. You
can use the document processor in a Data Processor transformation as a pre-processor that converts the
format of a source document before a transformation.

For more information, see the Informatica 10.1 Data Transformation User Guide.

Performance Improvement with Avro or Parquet Input


A Data Processor transformation receives Avro or Parquet data input from a complex file reader object.
Effective in version 10.1, you can configure the complex file reader settings to optimize performance for Avro
or Parquet input.

For more information, see the Informatica 10.1 Data Transformation User Guide.

Performance Improvement with COBOL Input in the Hadoop Environment


Effective in version 10.1, you can configure the complex file reader settings to optimize performance when
processing large COBOL files in a Hadoop environment. Use a regular expression to define how to split
record processing for an appropriate COBOL input file.

For more information, see the Informatica 10.1 Data Transformation User Guide.

Exception Transformations
Effective in version 10.1, you can configure a Bad Record Exception transformation and a Duplicate Record
Exception transformation to create exception tables in a non-default database schema.

Previously, you configured the transformations to create exception tables in the default schema on the
database.

This feature is also available in 9.6.1 HotFix 4. It is not available in 10.0.

For more information, see the Informatica 10.1 Developer Transformation Guide.

Workflows
This section describes changed workflow behavior in version 10.1.

Informatica Workflows
This section describes the changes to Informatica workflow behavior in version 10.1.

Parallel Execution of Human Tasks


Effective in version 10.1, the Data Integration Service can run Human tasks on multiple sequence flows in a
workflow in parallel. To create the parallel sequence flows, add Inclusive gateways to the workflow in the
Developer tool. Add one or more Human tasks to each sequence flow between the Inclusive gateways.

Previously, you added one or more Human tasks to a single sequence flow between Inclusive gateways.

For more information, see the Informatica 10.1 Developer Workflow Guide.

Chapter 15

Release Tasks (10.1)


This chapter includes the following topics:

• Metadata Manager, 171


• Security, 172

Metadata Manager
This section describes release tasks for Metadata Manager in version 10.1.

Informatica Platform Resources


Effective in version 10.1, to extract metadata from an Informatica 10.0 application that is deployed to a Data
Integration Service, you must install the version 10.0 Command Line Utilities. Install the utilities in a directory
that the 10.1 Metadata Manager Service can access. For best performance, extract the files to a directory on
the machine that runs the Metadata Manager Service.

When you configure the resource, you must also enter the file path to the 10.0 Informatica Command Line
Utilities installation directory in the 10.0 Command Line Utilities Directory property.

For more information about Informatica Platform resources, see the "Data Integration Resources" chapter in
the Informatica 10.1 Metadata Manager Administrator Guide.

Verify the Truststore File for Command Line Programs


Effective in version 10.1, when you configure a secure connection for the Metadata Manager web application,
the Metadata Manager command line programs do not accept security certificates that have errors. The
property that controls whether a command line program can accept security certificates that have errors is
removed. This feature is also available in 9.6.1 HotFix 4. It is not available in 10.0.

The [Link] property in the [Link] file controlled certificate validation
for mmcmd or mmRepoCmd. You could set the property to one of the following values:

• NO_AUTH. The command line program accepts the digital certificate, even if the certificate has errors.
• FULL_AUTH. The command line program does not accept a security certificate that has errors.
The NO_AUTH setting is no longer valid. The command line programs now only accept security certificates
that do not contain errors.

If a secure connection is configured for the Metadata Manager web application, and you previously set the
[Link] property to NO_AUTH, you must now configure a truststore file. To configure mmcmd or
mmRepoCmd to use a truststore file, edit the [Link] file that is associated with mmcmd or
mmRepoCmd. Set the [Link] property to the path and file name of the truststore file.

For more information about the [Link] files for mmcmd and mmRepoCmd, see the
"Metadata Manager Command Line Programs" chapter in the Informatica 10.1 Metadata Manager
Administrator Guide.

Security
This section describes release tasks for security features in version 10.1.

Permissions
After you upgrade to 10.1, the following Model repository objects have permission changes:

• Applications, mappings, and workflows. All users in the domain are granted all permissions.
• SQL data services and web services. Users with effective permissions are assigned direct permissions.
The changes affect the level of access that users and groups have to these objects.

After you upgrade, review and change the permissions on applications, mappings, workflows, SQL data
services, and web services to ensure that users have appropriate permissions on objects.

For more information, see the "Permissions" chapter in the Informatica 10.1 Security Guide.

Part IV: Version 10.0
This part contains the following chapters:

• New Products (10.0), 174


• New Features (10.0), 176
• Changes (10.0), 227
• Release Tasks (10.0), 258

Chapter 16

New Products (10.0)


This chapter includes the following topic:

• PowerExchange Adapters, 174

PowerExchange Adapters

PowerExchange Adapters for Informatica


This section describes new Informatica adapters in version 10.0.

PowerExchange for JD Edwards EnterpriseOne


Effective in version 10.0, you can use PowerExchange for JD Edwards EnterpriseOne to extract data from JD
Edwards EnterpriseOne sources and write data to JD Edwards EnterpriseOne targets.

For more information, see the Informatica PowerExchange for JD Edwards EnterpriseOne 10.0 User Guide.

PowerExchange for LDAP


Effective in version 10.0, you can use PowerExchange for LDAP to read data from and write data to LDAP
directory servers.

For more information, see the Informatica PowerExchange for LDAP 10.0 User Guide.

PowerExchange for Microsoft Dynamics CRM


Effective in version 10.0, you can use PowerExchange for Microsoft Dynamics CRM to read data from and
write data to Microsoft Dynamics CRM. You can import Microsoft Dynamics CRM business entities as read
and write data objects to create and run mappings to extract data from or load data to a Microsoft Dynamics
CRM entity.

For more information, see the Informatica PowerExchange for Microsoft Dynamics CRM 10.0 User Guide.

PowerExchange for Netezza


Effective in version 10.0, you can perform the following tasks with PowerExchange for Netezza:

• You can use PowerExchange for Netezza to read data from and write data to Netezza databases. You can
process large volumes of data by using PowerExchange for Netezza.
• You can use the Secure Sockets Layer (SSL) protocol to configure a secure connection between Netezza
clients and the Netezza server.

For more information, see the Informatica PowerExchange for Netezza 10.0 User Guide.

PowerExchange for OData
Effective in version 10.0, you can use PowerExchange for OData to read data from an OData provider that
exposes data through an OData service. You can also run a profile against OData data objects.

For more information, see the Informatica PowerExchange for OData 10.0 User Guide.

Chapter 17

New Features (10.0)


This chapter includes the following topics:

• Application Services, 176


• Big Data, 180
• Business Glossary, 182
• Command Line Programs, 185
• Connectivity, 192
• Data Types, 194
• Documentation, 195
• Domain, 196
• Informatica Administrator, 196
• Informatica Analyst, 201
• Informatica Developer, 202
• Informatica Development Platform, 206
• Mappings, 207
• Metadata Manager, 212
• PowerCenter, 215
• PowerExchange Adapters, 215
• Reference Data, 217
• Rule Specifications, 218
• Security, 219
• Transformation Language Functions, 220
• Transformations, 220
• Workflows, 225

Application Services
This section describes new application services features in version 10.0.

Disabling and Recycling Application Services
Effective in version 10.0, disabling and recycling application services have the following new features:

Planned and Unplanned Notes

When you disable or recycle an application service from the Administrator tool, you can specify whether
the action is planned or unplanned. Planned and unplanned notes appear on the Command History and
Events panels in the Domain view on the Manage tab.

Comments

When you disable or recycle an application service from the Administrator tool, you can optionally enter
comments about the action. Comments appear on the Command History and Events panels in the
Domain view on the Manage tab.

For more information, see the Informatica 10.0 Application Service Guide.

Data Integration Service


This section describes new Data Integration Service features in version 10.0.

Architecture
Effective in version 10.0, the Data Integration Service includes the following types of components:

Service components

Service components include modules that manage requests from client tools, the logical Data
Transformation Manager (LDTM) that optimizes and compiles jobs, and managers that manage
application deployment and caches. The service components run within the Data Integration Service
process. The Data Integration Service process must run on a node with the service role.

Compute component

The compute component of the Data Integration Service is the execution Data Transformation Manager
(DTM). The DTM extracts, transforms, and loads data to complete a data transformation job. The DTM
must run on a node with the compute role.

When the Data Integration Service runs on a single node, the service and compute components of the Data
Integration Service run on the same node. The node must have both the service and compute roles.

When the Data Integration Service runs on a grid, the service and compute components of the Data
Integration Service can run on the same node or on different nodes, based on how you configure the grid and
the node roles. When you configure a Data Integration Service grid to run jobs in separate remote processes,
the nodes in the grid can have a combination of the service only role, the compute only role, and both the
service and compute roles. Some nodes in the grid are dedicated to running the service processes while
other nodes are dedicated to running mappings.

For more information about Data Integration Service components, see the "Data Integration Service
Architecture" chapter in the Informatica 10.0 Application Service Guide.

DTM Resource Allocation Policy


Effective in version 10.0, the Data Transformation Manager resource allocation policy determines how to
allocate the CPU resources for tasks. The DTM uses an on-demand resource allocation policy to allocate CPU
resources.

For more information about the DTM resource allocation policy, see the "Data Integration Service
Architecture" chapter in the Informatica 10.0 Application Service Guide.

ASCII Data Movement Mode
Effective in version 10.0, the logical Data Transformation Manager (LDTM) component of the Data Integration
Service determines whether to use the ASCII or Unicode data movement mode for mappings that read from a
flat file or relational source. The LDTM determines the data movement mode based on the character sets
that the mapping processes. When a mapping processes all ASCII data, the LDTM selects the ASCII mode. In
ASCII mode, the Data Integration Service uses one byte to store each character, which can optimize
mapping performance. In Unicode mode, the service uses two bytes for each character.

For more information about the data movement mode, see the "Data Integration Service Architecture" chapter
in the Informatica 10.0 Application Service Guide.

Maximize Parallelism for Profiles


Effective in version 10.0, you can enable the Data Integration Service to maximize parallelism when it runs a
column profile and performs data domain discovery if you have the partitioning option. When you maximize
parallelism, the Data Integration Service dynamically divides the profiling data into partitions and uses
multiple threads to concurrently process the partitions. When the Data Integration Service uses additional
threads, the service can optimize profiling performance.

For more information about how to maximize parallelism, see the "Data Integration Service Management"
chapter in the Informatica 10.0 Application Service Guide.

Multiple Cache, Target, and Temporary Directories


Effective in version 10.0, you can configure multiple directories for the following Data Integration Service
properties:

Cache Directory

Configure multiple cache directories to optimize performance during cache partitioning for Aggregator,
Joiner, or Rank transformations.

Target Directory

Configure multiple target directories to optimize performance when multiple partitions write to a flat file
target.

Temporary Directories

Configure multiple temporary directories to optimize performance during cache partitioning for Sorter
transformations.

For more information about optimizing cache and target directories for partitioning, see the "Data Integration
Service Management" chapter in the Informatica 10.0 Application Service Guide.

Model Repository Service


This section describes new Model Repository Service features in version 10.0.

Version Control System Support


Effective in version 10.0, you can integrate the Model repository with a supported version control system.
When the Model repository is integrated with a version control system, the version control system protects
objects from being overwritten by other members of the development team. You can check objects out and
in, view and retrieve historical versions of objects, undo a checkout, and reassign a checked out object to
another user.

You can integrate the Model repository with the following version control systems:

• Perforce

• Subversion

For more information, see the "Model Repository Service" chapter in the Informatica 10.0 Application Service
Guide.

System Services
Effective in version 10.0, the domain includes system services. A system service is an application service
that can have a single instance in the domain. System services are automatically created for you when you
create or upgrade the domain. You can enable, disable, and configure system services.

The following image shows the System Services folder in the Domain Navigator:

The domain includes the following system services:

Email Service

The Email Service emails notifications for business glossaries and workflows. Enable the Email Service
to allow users to configure email notifications.

The Email Service emails the following notifications:

• Business glossary notifications.


• Workflow notifications. Workflow notifications include emails sent from Human tasks and
Notification tasks in workflows that the Data Integration Service runs.

Resource Manager Service

The Resource Manager Service manages computing resources in the domain and dispatches jobs to
achieve optimal performance and scalability. The Resource Manager Service collects information about
nodes with the compute role. The service matches job requirements with resource availability to identify
the best compute node to run the job.

Enable the Resource Manager Service when you configure a Data Integration Service grid to run jobs in
separate remote processes.

Scheduler Service

The Scheduler Service manages schedules for deployed mapping and workflow jobs in the domain.

Enable the Scheduler Service when you want to create schedules, assign jobs to them, and run
scheduled jobs.

For more information about system services, see the "System Services" chapter in the Informatica 10.0
Application Service Guide.

Big Data
This section describes new big data features in version 10.0.

Big Data Management Configuration Utility


Effective in version 10.0, you can use the Big Data Management Configuration Utility to automate part of the
configuration process for Big Data Management.

For more information, see the Informatica 10.0 Big Data Management Installation and Configuration Guide.

Hadoop Connection
Effective in version 10.0, you must configure a Hadoop connection when you run a mapping in the Hadoop
environment. You can edit the Hadoop connection to configure run-time properties for the Hadoop
environment. The run-time properties include properties for the Hive and Blaze engines.

The following image shows the Hadoop connection as a cluster type connection:

For more information, see the "Connections" chapter in the Informatica 10.0 Big Data Management User
Guide.

Hadoop Ecosystem
Effective in version 10.0, Informatica supports the following big data features and enhancements for the
Hadoop ecosystem:

Hadoop clusters on Amazon EC2

You can read data from and write data to Hortonworks HDP clusters that are deployed on Amazon EC2.

Hadoop distributions

You can connect to Hadoop clusters that run the following Hadoop distributions:

• Cloudera CDH 5.4


• MapR 4.0.2 with MapReduce 1 and MapReduce 2

Hive on Tez

You can use Hive on Tez as the execution engine for Hadoop clusters that run Hortonworks HDP.

Kerberos Authentication

You can use Microsoft Active Directory as the key distribution center for Cloudera CDH and Hortonworks
HDP Hadoop clusters.

Parameters for Big Data


Effective in version 10.0, you can use parameters to represent the following additional properties for big data:

• Complex file sources and targets


• Complex file sources and targets on HDFS
• Flat file sources and targets on HDFS
• HBase sources and targets
• Hive sources
• Hive targets in the Hadoop environment
• Run-time environment
For more information, see the "Mappings in a Hadoop Environment" chapter in the Informatica 10.0 Big Data
Management User Guide.

Run-Time and Validation Environments


Effective in version 10.0, you can select the Hadoop environment to run mappings on the Hadoop cluster.
When you select the Hadoop environment, you can also select the Hive or Blaze engine to push the mapping
logic to the Hadoop cluster. The Blaze engine is an Informatica proprietary engine for distributed processing
on Hadoop.

When you run a mapping in the Hadoop environment, you must configure a Hadoop connection for the
mapping. Validate the mapping to ensure that you can push the mapping logic to Hadoop. After you validate
a mapping for the Hadoop environment, you can run the mapping.

The following image shows the Hadoop run-time and validation environments:

For more information, see the "Mappings in a Hadoop Environment" chapter in the Informatica 10.0 Big Data
Management User Guide.

Business Glossary
This section describes new Business Glossary features in version 10.0.

Approval Workflow
Effective in version 10.0, data stewards can publish Glossary assets after a voting process. The glossary
administrator configures the approval workflow for a glossary, after which the data steward must publish or
reject all the assets in the glossary through a voting process. The glossary administrator can configure up to
two levels of approvals. The approvers can approve or reject the asset changes or abstain from voting. The
data steward publishes or rejects the asset based on the voting results.

Glossary assets that are published after an approval workflow have a new tab called Voting History in the
audit trail. This tab displays the details about the approval workflow.

For more information, see the "Approval Workflow" chapter in the Informatica 10.0 Business Glossary Guide.

Glossary Asset Attachments


Effective in version 10.0, you can add attachments to Glossary assets. Reference users can view the
attachments when they open the Glossary assets in the Glossary workspace.

For more information about asset attachments, see the "Glossary Content Management" chapter in the
Informatica 10.0 Business Glossary Guide. For more information about configuring the attachment directory,
see the "Analyst Service" chapter in the Informatica Application Service Guide.

Long String Data Type
Effective in version 10.0, you can create a custom property that is of the long string data type. The long string
data type does not have any limit on the number of characters that the content managers can use when
adding content to the field.

For more information about the long string data type, see the "Glossary Content Management" chapter in the
Informatica 10.0 Business Glossary Guide.

Support for Rich Text


Effective in version 10.0, data stewards can format content in rich text when they configure default asset
properties such as Description, Usage Context, and Example. Custom properties that have a long string data type
also support rich text.

Data stewards can format the text in the following ways:

• Make the text bold, italicized, or underlined.


• Change the font and font color.
• Add an ordered or unordered list.
• Use predefined styles.
• Insert internal and external links to the text.

For more information about rich text, see the "Glossary Content Management" chapter in the Informatica 10.0
Business Glossary Guide.

Import and Export Enhancements


Effective in version 10.0, you can choose to import or export business glossaries with or without linked
assets from other glossaries, attachments, and the audit history.

Optionally, you can choose to run the import task in the background. While the Analyst tool imports
glossaries in the background, you can perform other tasks. After the import is complete, the Analyst tool
sends you a notification.

In the final step of the import wizard, the Analyst tool now displays an enhanced summary and conflict
resolution options.

For more information about the import and export enhancements, see the "Glossary Administration" chapter
in the Informatica 10.0 Business Glossary Guide.

Email Notifications
Effective in version 10.0, you can choose to receive notifications through email. You continue to receive
notifications in the Analyst tool. You can configure the email notification settings in the Glossary Settings
workspace.

For more information about email notifications, see the "Finding Glossary Content" chapter in the Informatica
10.0 Business Glossary Guide.

Relationship View Diagram Enhancements


Effective in version 10.0, the relationship view diagram has the following enhancements:

View Full Asset Names
You have an option to view the full asset name and relationship name in the relationship view diagram. The
Analyst tool truncates the asset names and relationship names that are longer than 200 characters by
default.

Find Assets
You can search for assets that are displayed in the relationship view diagram.

Expand and Collapse Node


You can expand and collapse a node to show or hide the assets in the node.

Pan the Canvas


You can click and drag the relationship view canvas to pan across the canvas and view assets.

For more information, see the "Finding Glossary Content" chapter in the Informatica 10.0 Business Glossary
Guide.

Analyst Tool Privileges


Effective in version 10.0, you can assign users the privilege to view published Glossary assets in the
Administrator tool. Providing the View Glossaries privilege in the Administrator tool is equivalent to providing
read permission for glossaries and published Glossary assets in the Glossary Security workspace in the
Analyst tool.

For more information, see the Informatica 10.0 Security Guide.

Business Term Links


Effective in version 10.0, you can link profiles to business terms. The Analyst tool provides a hyperlink to
linked technical assets and data objects. The Analyst tool opens the data objects in their respective
workspaces when you click the hyperlink.

For more information, see the Informatica 10.0 Business Glossary Guide.

Glossary Security
Effective in version 10.0, the Analyst tool contains the following enhancements to the Glossary security:

Glossary Security User Interface


The Glossary Security workspace view displays the number of roles, users, and groups.

Permissions and Privileges Wizard


In the Glossary Security workspace, when you use the wizard to add permissions or privileges to users, you
can sort Glossary assets by category and type. You can also assign read and write permissions in bulk to all
assets for a user.

Asset View
Effective in version 10.0, the asset view also displays the number of attachments and the name of the
glossary that contains the asset.

For more information, see the "Introduction to Business Glossary" chapter in the Informatica 10.0 Business
Glossary Guide.

Default Approvers
Effective in version 10.0, the service administrator can configure the default approvers for a glossary. Only
the default approvers that the service administrator specifies receive notification during the normal approval
process or can participate in level 1 voting during the advanced approval workflow.

For more information, see the "Glossary Administration" chapter in the Informatica 10.0 Business Glossary
Guide.

Command Line Programs


This section describes new and changed commands in version 10.0.

infacmd bg Command
The following table describes a new infacmd bg command:

Command Description

upgradeRepository Upgrades the Business Glossary data in the Model repository. Run this command after you
upgrade the domain.
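For example, the following sketch shows how such a command is typically invoked. The domain, user, and
service names are placeholders, and the option flags follow general infacmd conventions rather than verified
syntax for this command; see the Informatica 10.0 Command Reference for the exact options:

   infacmd.sh bg upgradeRepository -dn MyDomain -un Administrator -pd MyPassword -sn MyModelRepositoryService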

infacmd dis Commands


The following table describes new infacmd dis commands:

Command Description

addParameterSetEntries Adds entries to a parameter set for a mapping or workflow that is deployed as an
application.

deleteParameterSetEntries Deletes entries from a parameter set for a mapping or workflow that is deployed as an
application. You can delete specific parameter set entries or you can delete all of the
parameter set entries.

listApplicationObjects Lists the objects that an application contains.

listComputeOptions Lists Data Integration Service properties for a node with the compute role.

listParameterSetEntries Lists the entries in a parameter set.

listParameterSets Lists the parameter sets in an application.

updateComputeOptions Updates Data Integration Service properties for a node with the compute role. Use the
command to override Data Integration Service properties for a specific compute node.

updateParameterSetEntries Updates entries in a parameter set for a mapping or workflow in an application. Enter
parameter name-value pairs to update, separated by spaces.

stopBlazeService Stops the components of the Blaze engine from running.
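As an illustration, the following hedged example adds entries to a parameter set and then lists them. All
names are placeholders, and flags such as -a and -ps are assumptions based on common infacmd usage;
verify the syntax in the Informatica 10.0 Command Reference:

   infacmd.sh dis addParameterSetEntries -dn MyDomain -sn MyDIS -un Administrator -pd MyPassword
      -a MyApplication -ps MyParameterSet param1=value1 param2=value2
   infacmd.sh dis listParameterSetEntries -dn MyDomain -sn MyDIS -un Administrator -pd MyPassword
      -a MyApplication -ps MyParameterSet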

The following table describes changes to infacmd dis command options:

Command Description

UpdateServiceOptions The following options are added for memory allocation:
- [Link]
- [Link]
- [Link]
- [Link]
Use these options to specify the maximum amount of memory, in bytes, that the Data
Integration Service can allocate for a mapping, profile, SQL service, or web service
request.
The following options are added for workflow operations:
- [Link]
Use the option to enable or disable the module that runs workflows.
- [Link]
Use the option to specify the connection name of the database that stores run-
time metadata for workflows.
The [Link] option can be set to the following
values:
- IN_PROCESS. Runs jobs in the Data Integration Service process.
- OUT_OF_PROCESS. Runs jobs in separate DTM processes on the local node.
- OUT_OF_PROCESS_REMOTE. Runs jobs in separate DTM processes on remote
nodes.
Previously, the option could be set to true (IN_PROCESS) or false
(OUT_OF_PROCESS).
The following options are moved from the UpdateServiceProcessOptions command
to the UpdateServiceOptions command:
- [Link]
- [Link]
- [Link]
- [Link]
- [Link]
- [Link]
- [Link]
- [Link]
- [Link]
- [Link]
- [Link]
- [Link]
- [Link]
- [Link]
The following email server options are moved to the isp UpdateSMTPOptions
command for scorecard notifications:
- [Link]
- [Link]
- [Link]
- [Link]
- [Link]
The following email server options are removed for scorecard notifications:
- [Link]
- [Link]
- [Link]
The following email server options are moved to the es UpdateSMTPOptions
command for workflow notifications:
- [Link]
- [Link]
- [Link]
- [Link]

Command Description

- [Link]
- [Link]
- [Link]
- [Link]
The following email server options are removed:
- [Link]
- [Link]
The following options are removed for workflow operations:
- [Link]
- [Link]
- [Link]
- [Link]

UpdateServiceProcessOptions The [Link] option is obsolete. The remaining execution
options are moved to the UpdateServiceOptions command.

infacmd es Commands
The new infacmd es program manages the Email Service.

The following table describes the new infacmd es commands:

Command Description

ListServiceOptions Returns a list of properties that are configured for the Email Service.

UpdateServiceOptions Updates Email Service properties.

UpdateSMTPOptions Updates the email server properties for the Email Service.

infacmd hts Commands


All infacmd hts commands are obsolete.

The following table describes the obsolete infacmd hts commands and identifies the commands that you can
use to perform the corresponding actions in version 10.0:

Command Description

CreateDB Creates the database tables that store run-time metadata for Human tasks.
In version 10.0, all run-time metadata for workflows is stored in a common set of tables. Use infacmd
wfs CreateTables to create the workflow metadata tables.

DropDB Drops the database tables that store run-time metadata for Human tasks.
In version 10.0, all run-time metadata for workflows is stored in a common set of tables. Use infacmd
wfs DropTables to drop the workflow metadata tables.

Exit Stops a Human task and passes the records that the task identifies to the next stage in the workflow.
Use infacmd wfs BulkComplete to stop a Human task and to pass the records that the task identifies to
the next stage in the workflow.
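For example, a script that previously created the Human task tables with infacmd hts CreateDB might now
create the common workflow tables instead. This is a sketch with placeholder names; the connection options
follow general infacmd conventions:

   # Before version 10.0: infacmd.sh hts CreateDB ...
   # In version 10.0:
   infacmd.sh wfs createTables -dn MyDomain -sn MyDIS -un Administrator -pd MyPassword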

infacmd isp Commands
The following table describes new infacmd isp commands:

Command Description

GetSystemLogDirectory Prints the system log directory.

ListNodeRoles Lists all roles on a node in the domain.

UpdateNodeRole Updates the role on a node in the domain. You can enable or disable the service role or the
compute role on a node.
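For example, the following sketch lists the roles on a node. The -nn flag for the node name is an assumption
based on other infacmd isp commands; UpdateNodeRole takes additional options to enable or disable each
role, which are documented in the Informatica 10.0 Command Reference:

   infacmd.sh isp ListNodeRoles -dn MyDomain -un Administrator -pd MyPassword -nn node02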

The following table describes changes to infacmd isp command options:

Command Description

AddDomainNode The following options are added:
- EnableServiceRole
- EnableComputeRole
Use these options to enable the service role or the compute role on a node when you
add the node to the domain.

AddNodeResource The following options are added:
- ResourceCategory. Use this option to specify that the resource is for the
PowerCenter Integration Service.
- ResourceValue. This option is reserved for future use.

CreateConnection The connection options for the Hadoop connection are added.

DisableNodeResource, EnableNodeResource, ListNodeResources, and RemoveNodeResource
The ResourceCategory option is added. Use this option to specify that the resource is for the PowerCenter
Integration Service.

GetLog The following service types are added for the ServiceType option:
- ES. Email Service
- SCH. Scheduler Service
- RMS. Resource Manager Service

GetNodeName The Outputfile option is added. Use this option with a file name and path to print the
node name in a file.

ListNodes The NodeRole option is added. Use this option to list nodes with a specified role.

ListServices The following service types are added for the ServiceType option:
- ES. Email Service
- SCH. Scheduler Service
- RMS. Resource Manager Service

Command Description

PurgeMonitoring The NumDaysToRetainDetailedStat option is added. Use this option to configure the
number of days of detailed historical data that are retained in the Model repository
when the Data Integration Service purges statistics.

UpdateMonitoringOptions The DetailedStatisticsExpiryTime option is added. Use this option to configure when
the Data Integration Service purges detailed statistics from the Model repository.
The valid StatisticsExpiryTime values are changed. Minimum is 0. Maximum is 366.
Default is 180.

infacmd mrs Commands


The following table describes new infacmd mrs commands:

Command Description

CheckInObject Checks in a single object that is checked out. The object is checked in to the Model
repository.

CreateFolder Creates a folder in a project in a Model repository.

CreateProject Creates a project in the default Model repository.

DeleteFolder Deletes a folder from a project in a Model repository.

DeleteProject Deletes a project in a Model repository.

ListCheckedOutObjects Displays a list of objects that are checked out by a user.

ListFolders Lists the names of all of the folders in the project folder path that you specify.

ListLockedObjects Displays a list of objects that are locked by a user.

PopulateVCS Synchronizes the Model repository with a version control system.

ReassignCheckedOutObject Reassigns the ownership of a checked-out object to another user.

RenameFolder Renames a folder in a project.

UndoCheckout Reverts the checkout of a Model repository object.

UnlockObject Unlocks a Model repository object that is locked by a user.
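For example, after you configure versioning for the Model repository, a sketch of synchronizing it with the
version control system and reviewing checked-out objects might look like the following. The service, domain,
and user names are placeholders:

   infacmd.sh mrs PopulateVCS -dn MyDomain -sn MyModelRepositoryService -un Administrator -pd MyPassword
   infacmd.sh mrs ListCheckedOutObjects -dn MyDomain -sn MyModelRepositoryService -un Administrator -pd MyPassword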

The following table describes changes to infacmd mrs command options:

Command Description

UpdateServiceOptions The following options are added:
- [Link]
- [Link]
- [Link]
- [Link]
- [Link]
- [Link]
Use these options to configure versioning for the Model repository.

infacmd ms Commands
The following table describes new infacmd ms commands:

Command Description

GetRequestLog Writes the mapping log to the specified file.

UpgradeMappingParameterFile Converts a parameter file you created in a previous Informatica version to a
parameter file format that is valid for Informatica version 10.0.

The following table describes updated infacmd ms command options:

Command Description

RunMapping The following options are added:
- OptimizationLevel. Use to control the optimization methods that the Data Integration Service applies
to a mapping.
- PushdownType. Use to control the pushdown type that the Data Integration Service applies to a
mapping.
- CustomProperties. Use to define custom properties for a mapping at the request of Informatica
Global Customer Support.
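For example, the following hedged invocation runs a deployed mapping with the new options. The -a and -m
flags and the short option names shown for optimization level and pushdown type are assumptions for
illustration only; verify them in the Informatica 10.0 Command Reference:

   infacmd.sh ms RunMapping -dn MyDomain -sn MyDIS -un Administrator -pd MyPassword
      -a MyApplication -m MyMapping -ol Full -pt Source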

infacmd rms Commands


The new infacmd rms program manages the Resource Manager Service.

The following table describes the new infacmd rms commands:

Command Description

ListComputeNodeAttributes Lists the compute node attributes that have been overridden for the specified node or
for all nodes.

ListServiceOptions Lists the properties for the Resource Manager Service.

SetComputeNodeAttributes Overrides the compute node attributes for the specified node.

UpdateServiceOptions Updates Resource Manager Service properties.

infacmd sch Commands


The new infacmd sch program manages the Scheduler Service.

The following table describes the new infacmd sch commands:

Command Description

CreateSchedule Creates a schedule for one or more deployed mapping or workflow objects.

DeleteSchedule Deletes one or more schedules.

ListSchedule Returns a list of jobs that are running on a schedule.

ListServiceOptions Returns a list of the properties that are configured for the Scheduler Service.

ListServiceProcessOptions Returns a list of the properties that are configured for a Scheduler Service process.

PauseAll Pauses all schedules.

PauseSchedule Pauses a schedule.

ResumeAll Resumes all schedules.

ResumeSchedule Resumes a schedule.

UpdateSchedule Updates a schedule configuration.

UpdateServiceOptions Updates the properties for the Scheduler Service.

UpdateServiceProcessOptions Updates the properties for a Scheduler Service process.

Upgrade Upgrades the Scheduler Service configuration.
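For example, you might pause all schedules before domain maintenance and resume them afterward. This
sketch uses placeholder domain, service, and user names:

   infacmd.sh sch PauseAll -dn MyDomain -sn MySchedulerService -un Administrator -pd MyPassword
   infacmd.sh sch ResumeAll -dn MyDomain -sn MySchedulerService -un Administrator -pd MyPassword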

infacmd wfs Commands


The following table describes new infacmd wfs commands:

Command Description

BulkComplete Stops operations for a Human task and passes the records that the task identifies to
the next stage in the workflow.

CreateTables Creates the database tables that store run-time metadata for workflows.

DropTables Drops the database tables that store run-time metadata for workflows.

ListMappingPersistedOutputs Lists the state of each persisted Mapping output from a Mapping task instance that
the command specifies.

SetMappingPersistedOutputs Updates the persisted mapping outputs for a Mapping task instance that you specify
or sets the persisted mapping outputs to null values.

UpgradeParameterFile Upgrades a parameter file to verify that the parameter values in the file are valid in
the current release. When you run the command, you identify a parameter file to
upgrade and you specify a target file to contain the valid parameter values.

The following table describes updated infacmd wfs command options:

Command Description

abortWorkflow The RuntimeInstanceID option is renamed to InstanceId. The option identifies the workflow
instance to abort.
The Wait option is removed.

cancelWorkflow The RuntimeInstanceID option is renamed to InstanceId. The option identifies the workflow
instance to cancel.
The Wait option is removed.

recoverWorkflow The RuntimeInstanceID option is renamed to InstanceId. The option identifies the workflow
instance to recover.
The Wait option is removed.

startWorkflow The ParameterSet option is added. The option specifies the name of the parameter set that
the workflow uses at run time.
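For example, the following sketch starts a workflow with a named parameter set. The -a, -wf, and -ps flags
are assumptions for illustration; verify the option names in the Informatica 10.0 Command Reference:

   infacmd.sh wfs startWorkflow -dn MyDomain -sn MyDIS -un Administrator -pd MyPassword
      -a MyApplication -wf MyWorkflow -ps MyParameterSet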

infasetup Commands
The following table describes the new SystemLogDirectory option:

Command Description

DefineDomain, DefineGatewayNode, DefineWorkerNode, UpdateGatewayNode, and UpdateWorkerNode
The SystemLogDirectory option is added. Use this option to designate a custom location for logs.
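For example, the following hedged command updates a gateway node to write system logs to a custom
directory. The -sld shorthand for the SystemLogDirectory option is an assumption; verify the option name in
the Informatica 10.0 Command Reference:

   infasetup.sh UpdateGatewayNode -sld /opt/informatica/system_logs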

For more information, see the Informatica 10.0 Command Reference.

pmrep Command Reference


The following table describes the pmrep massupdate command update:

Session Property Type Description

session_property This massupdate command updates the value of any supported session or session config
property whether or not it is overridden.
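For example, the following sketch connects to a repository and updates one session property across a
folder. The property name, folder, and connection values are placeholders, and the massupdate flags reflect
typical pmrep usage rather than verified syntax:

   pmrep connect -r MyRepository -d MyDomain -n Administrator -x MyPassword
   pmrep massupdate -t session_property -n "Stop on errors" -v 1 -f MyFolder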

Connectivity
This section describes new connectivity features in version 10.0.

PowerCenter Connectivity
This section describes new connectivity features in version 10.0.

Native Connectivity to Microsoft SQL Server


Effective in version 10.0, you can use the DataDirect ODBC driver for Microsoft SQL Server to configure native
connectivity to Microsoft SQL Server databases from UNIX machines.

You can select the connection provider that you want to use to connect to the Microsoft SQL Server
database. You can select either the ODBC or OLE DB connection type. You can also enable the Integration
Service to use the Data Source Name (DSN) for the connection. Additionally, you can use NTLM
authentication to authenticate the user who connects to Microsoft SQL Server.

For more information about configuring native connectivity, see the "Connecting to Databases from UNIX"
appendix in the Informatica 10.0 Installation and Configuration Guide.

Connection Switching
Effective in version 10.0, in the Developer tool, you can switch the connection of a relational data object or
customized data object to use a different relational database connection. After you switch the connection,
the Developer tool updates the connection details for the data object in all Read, Write, and Lookup
transformations that are based on the data object. You might want to switch the connection when you
migrate from one database to another and want to simultaneously update the existing mappings to use the
new connection.

You can switch a connection to one of the following connection types:

• IBM DB2
• Microsoft SQL Server
• ODBC
• Oracle

The following image shows the dialog box that you use to switch a connection:

For more information, see the "Connections" chapter in the Informatica 10.0 Developer Tool Guide.

Data Types
This section describes new data type features in version 10.0.

Informatica Data Types


This section describes new data types in the Developer tool.

Decimal Data Type


Effective in version 10.0, some transformations support the Decimal data type with a precision of up to 38
digits. The Decimal data type has a precision of 1 to 38 digits and a scale of 0 to 38. All other
transformations support the Decimal data type with a precision of up to 28 digits.

For transformations that support the Decimal data type of precision up to 38 digits, when the target contains
a precision that is greater than 38 digits and has high precision enabled, the Data Integration Service stores
the result as a double.

For more information, see the "Data Type Reference" appendix in the Informatica 10.0 Developer Tool Guide.

Mappings with the Decimal 38 Data Type


Effective in version 10.0, if you run a mapping in high precision mode that contains fields with a precision
greater than 28 but less than or equal to 38, the Data Integration Service processes a precision of up to 38
digits. There is no behavior change after the upgrade if the precision is greater than 38 digits.

The following table describes the post-upgrade behavior based on the applicable precision:

Precision                                      Previous    10.0

Greater than 28 but less than or equal to 38   Double      Decimal

Over 38                                        Double      Double

For example, you have the following source data: 12345678901234567890123456789012345678

Previously, the target contains the following data: 12345678901234500000000000000000000000

In 10.0, the target contains the following data: 12345678901234567890123456789012345678

For more information, see the "Data Type Reference" appendix in the Informatica 10.0 Developer Tool Guide.

Timestamp with Time Zone


Effective in version 10.0, most transformations support the Timestamp with Time Zone data type. Timestamp
with Time Zone is a variant of the Timestamp data type that includes a time zone offset or time zone region
name.

When you import the Timestamp with Time Zone data type into the Developer tool, the associated
transformation data type is timestampWithTZ. timestampWithTZ has a precision of 36 and a scale of 9.
The Timestamp with Time Zone displacement value range is -12:00 < UTC < +14:00.

For more information, see the "Data Type Reference" appendix in the Informatica 10.0 Developer Tool Guide.

Timestamp with Local Time Zone
Effective in version 10.0, the Timestamp with Local Time Zone data type is another variant of the Timestamp
data type in which the time zone data is normalized to the database time zone.

When you import the Timestamp with Local Time Zone data type into the Developer tool, the associated
transformation data type is date/time. The Timestamp with Local Time Zone data type is implicitly supported
by most transformations because the functionality is equivalent to Timestamp.

Timestamp (6) with Local Time Zone has a precision of 26 and a scale of 6. It is mapped to the date/time
(29,9) transformation data type.

For more information, see the "Data Type Reference" appendix in the Informatica 10.0 Developer Tool Guide.

Documentation
This section describes new or updated guides with the Informatica documentation in version 10.0.

The Informatica documentation contains the following new guides:


Informatica Accessibility Guide

Effective in version 10.0, the Informatica Accessibility Guide contains accessibility information and
keyboard shortcuts for Informatica Administrator, Informatica Analyst, and Informatica Developer. The
Informatica Accessibility Guide is included in the online help for the Administrator tool, Analyst tool, and
Developer tool.

For more information, see the Informatica 10.0 Accessibility Guide.

Informatica Big Data Management Security Guide

Effective in version 10.0, the Informatica Big Data Management Security Guide contains security
information for Big Data Management and Hadoop.

Previously, security for big data and Hadoop was documented in the Informatica Big Data Edition User
Guide.

The following guides are removed from the PowerCenter documentation:


PowerCenter Data Profiling Guide

Effective in version 10.0, the PowerCenter Data Profiling Guide is removed from the PowerCenter
documentation.

To learn more about profiling and discovery in Informatica, see the Informatica 10.0 Data Discovery
Guide.

Informatica Big Data Edition User Guide

Effective in version 10.0, the Informatica Big Data Edition User Guide is removed from the PowerCenter
documentation.

To learn more about big data in Informatica, see the Informatica 10.0 Big Data Management User Guide.

Informatica Big Data Edition Installation and Configuration Guide

Effective in version 10.0, the Informatica Big Data Edition Installation and Configuration Guide is removed
from the PowerCenter documentation.

To learn more about big data installation and configuration in Informatica, see the Informatica 10.0 Big
Data Management Installation and Configuration Guide.

The following guide is renamed:

Informatica Data Services Performance Tuning Guide

Effective in version 10.0, the Informatica Data Services Performance Tuning Guide is renamed to the
Informatica Performance Tuning Guide.

To learn more about performance tuning in Informatica, see the Informatica 10.0 Performance Tuning
Guide.

Domain
This section describes new domain features in version 10.0.

Nodes
Effective in version 10.0, each node has a role that defines the purpose of the node.

A node can have the following roles:

Service role

A node with the service role can run application services. When you enable the service role on a node,
the Service Manager supports application services configured to run on that node.

Compute role

A node with the compute role can perform computations requested by remote application services.
When you enable the compute role on a node, the Service Manager manages the containers on the node.
A container is an allocation of memory and CPU resources. An application service uses the container to
remotely perform computations on the node. For example, a Data Integration Service grid includes Node
1 with the service role and Node 2 with the compute role. The Data Integration Service process that runs
on Node 1 runs a mapping within a container on Node 2.

Service and compute roles

A node with both roles can run application services and locally perform computations for those services.

By default, each gateway and worker node has both the service and compute roles enabled. If a node is
assigned to a Data Integration Service grid that is configured to run jobs on remote nodes with the compute
role, you might want to update the node role. Enable only the service role to dedicate the node to running the
Data Integration Service process. Enable only the compute role to dedicate the node to running Data
Integration Service mappings.

For more information about node roles, see the "Nodes" chapter in the Informatica 10.0 Administrator Guide.

Informatica Administrator
This section describes new Administrator tool features in version 10.0.

Manage Tab
Effective in version 10.0, the Manage tab has the following new features:

Domain view

The Domain view is an overview of the status of the domain. You can view information about the domain,
view historical information about the domain, and perform common actions.

The following image shows the Domain view on the Manage tab:

1. Domain Actions menu
2. Contents panel
3. Object Actions menu
4. Service State Summary
5. Memory usage indicator
6. CPU usage indicator

The Domain view contains the following information:

• Domain. You can view properties, logs, and past events for the domain. You can also shut down the
domain.
• Contents panel. Displays services, nodes, and grids in the domain. You can view properties, events,
logs, and dependencies for objects. You can also enable, disable, and recycle services and shut down
nodes.
• Filter. You can filter domain contents by state or service type. You can also search domain objects, or
navigate domain objects by type, grid, or folder.
• Service State Summary. Doughnut chart that displays the number and states of services in the
domain.
• Resource usage panels. Bar charts that compare memory and CPU usage for objects in the domain to
memory and CPU usage for all processes on the machine.
• Command History. Displays service lifecycle commands that users issue from the Administrator tool.
Lifecycle commands include enable, disable, and recycle.
• History view. Displays historical status, resource consumption, and events in the domain for a
selected time range.

• Events panel. Displays events for services and nodes in the domain.

Navigator

You can search for and filter nodes, application services, and grids in the Domain Navigator on the
Services and Nodes view. You can search for an object by name. Or, you can filter the list of objects that
appear in the Navigator by object type.

Schedules view

You can view and manage schedules on the Schedules view.

For more information, see the Informatica 10.0 Administrator Guide.

Dependency Graph
Effective in version 10.0, the Dependency graph is accessed from the Domain view on the Manage tab.
Previously, the Dependency graph was accessed from the Services and Nodes view on the Domain tab.

The Dependency graph has a new user interface and additional functionality.

The following image shows the new Dependency graph:

You can perform the following tasks in the Dependency graph:

• View properties for a service, node, or grid.


• View logs for a service.
• Shut down a node.
• Enable or disable a service.
• Recycle a service.
• Disable downstream dependencies for a service. You can disable one or more services that depend on a
service. Downstream processes are disabled in abort mode.
• Recycle downstream dependencies for a service. You can recycle one or more services that depend on a
service. Downstream processes are recycled in abort mode.

For more information, see the Informatica 10.0 Administrator Guide.

Monitoring
Effective in version 10.0, the Monitoring tab in the Administrator tool is renamed the Monitor tab.

The Monitor tab has the following new features:

Views on the Monitor tab

The Monitor tab contains the following views:

• Summary Statistics view. Displays resource usage, object distribution, and object states for a
selected time range.
The following image shows the Summary Statistics view:

• Execution Statistics view. Contains the Navigator and views that were on the Monitoring tab in
previous versions.

Views on the Execution Statistics view

You can view statistics about ad hoc mapping jobs, deployed mapping jobs, and mapping objects in a
workflow.

When you select one of these objects in the contents panel, the details panel displays the following new
views:

• Summary Statistics view. Displays throughput and resource usage information for the source and
target.
The following image shows the Summary Statistics view for a mapping job:

• Detailed Statistics view. Appears for jobs that run in separate local processes for longer than one
minute. Displays graphs of throughput and resource usage information for the source and target.
The following image shows the Detailed Statistics view for a mapping job in a workflow:

Configuration

Monitoring Configuration, formerly Global Settings, has the new option Preserve Detailed Historical Data.
Use this option to configure when expired per-minute statistics can be purged from the Model repository.
Default is 14. Minimum is 1. Maximum is 14.

For more information, see the "Monitoring" chapter in the Informatica 10.0 Administrator Guide.

Informatica Analyst
This section describes new Analyst tool features in version 10.0.

Asset Versioning
Effective in version 10.0, when the Model repository is integrated with a version control system, the version
control system protects assets from being overwritten by other members of the development team. You can
check assets out and in, and undo the checkout of assets.

For more information, see the "Model Repository" chapter in the Informatica 10.0 Analyst Tool Guide.

Profiles
This section describes new Analyst tool features for profiles and profile results.

Column Profile
Effective in version 10.0, you can right-click the data object in the Library workspace to create a column
profile. The data object and folder options are updated automatically in the profile wizard.

For more information about column profiles, see the "Column Profiles in Informatica Analyst" chapter in the
Informatica 10.0 Data Discovery Guide.

Column Profile Results


Effective in version 10.0, column profile results have the following new features and enhancements:

• View profile results in summary view and detailed view. The summary view provides a high-level overview
of the profile results in a grid format. The detailed view displays column-specific information in detail.
• View outliers in the summary view and detailed view of profile results. An outlier is a pattern, value, or
frequency for a column that does not fall within an expected range of values.
• View profile results for the latest profile run, historical profile run, and consolidated profile run. You can
view the profile results for any historical profile run. When you run the consolidated profile run, you can
view the latest results for each column in the profile.
• Compare profile results for two profile runs, and view the profile results in summary view and detailed
view.
• View profile results for a profile with JSON or XML data sources.
• Add business terms, tags, and comments to a profile and columns in the profile.
For more information about column profile results, see the "Column Profile Results in Informatica Analyst"
chapter in the Informatica 10.0 Data Discovery Guide.

Decimal Data Type


Effective in version 10.0, you can create profiles with columns that have the Decimal data type with a
precision of up to 38 digits.

For more information, see the Informatica 10.0 Data Discovery Guide.

JDBC Connectivity
Effective in version 10.0, you can specify a JDBC connection as a profiling warehouse connection for IBM
DB2 UDB, Microsoft SQL Server, and Oracle database types. You can create column profiles, rule profiles,
domain discovery, and scorecards with a JDBC connection as a profiling warehouse connection.

For more information, see the Informatica 10.0 Installation and Configuration Guide.



Object Versioning
Effective in version 10.0, when the Model repository is integrated with a version control system, the version
control system protects objects from being overwritten by other members of the development team. You can
check profiles out and in, undo the checkout of profiles, and view and restore historical versions of profiles.

For more information about object versioning, see the "Column Profiles in Informatica Analyst" chapter in the
Informatica 10.0 Data Discovery Guide.

Rules and Filters


Effective in version 10.0, you can add or edit rules and filters when you create a column profile.

For more information, see the Informatica 10.0 Data Discovery Guide.

Scorecard Filter
Effective in version 10.0, you can create and apply a filter on the metrics of a scorecard.

For more information about scorecard filter, see the "Scorecards in Informatica Analyst" chapter in the
Informatica 10.0 Data Discovery Guide.

Informatica Developer
This section describes new Informatica Developer features in version 10.0.

Generate and Execute DDL


Effective in Informatica 10.0, you can create tables in a database by generating and executing a DDL script.
By using the Developer tool, you can generate a DDL script for one or more relational data objects in the
Model repository, and run the DDL script to create or replace tables in the target database. If a target already
exists in that database, you can drop the target and re-create it.
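A minimal sketch of the kind of DDL script that this feature generates and runs; the table name, column
names, and Oracle-style data types are illustrative, not taken from this guide:

    DROP TABLE CUSTOMER_TGT;

    CREATE TABLE CUSTOMER_TGT (
        CUSTOMER_ID    NUMBER(10)    NOT NULL,
        CUSTOMER_NAME  VARCHAR2(120),
        CREATED_DATE   DATE
    );

The DROP statement corresponds to the option to drop and re-create a target that already exists in the
database.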

For more information, see the "Physical Data Objects" chapter in the Informatica Developer Tool Guide.

Generate Relational and Flat File Metadata at Run Time


Effective in version 10.0, you can create mappings with dynamic sources and targets that allow metadata
changes to the data sources. When you configure a source or target to be dynamic, the Data Integration
Service can interpret metadata changes to relational and flat file data sources at run time.

The Data Integration Service can perform the following functions:

• Read data from sources where the order of the columns in the source is different from that of the
columns in the physical data object.
• Read data from additional columns in sources that are not present in the physical data object.
• Ignore data for columns that are present in the physical data object but not in the source.

For relational data sources, the Data Integration Service directly fetches the metadata changes from the
database schema.

For flat file data sources, you must configure the flat file data object for the Data Integration Service to fetch
the metadata changes from the data file header, a control file, or automatically from the columns in the data
source. Configure the Generate Run-time Column Names property on the Advanced tab of the flat file data
object.
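
For example, when you configure the flat file data object to fetch metadata from the data file header, a
delimited source file such as the following (contents are illustrative) supplies the column names and column
order at run time:

    ACCOUNT_ID,ACCOUNT_NAME,BALANCE,OPENED_DATE
    10001,Acme Corp,2500.00,2015-03-02
    10002,Globex Ltd,130.75,2015-04-17

If a column is later added to the header, a dynamic Read transformation can pick up the new column without
changes to the physical data object.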



When you develop a mapping, configure the Read and Write transformations to get data object columns
directly from the data sources at run time. You can also configure the Lookup transformations to get data
object columns directly from the lookup sources. Select At run time, get data object columns from data
source on the Data Object tab of the transformation.

For more information, see the "Dynamic Mappings" chapter in the Informatica 10.0 Developer Mapping Guide.

Import from PowerCenter


Effective in version 10.0, you can import the following PowerCenter transformations into the Developer tool:

• Normalizer transformation
• Sequence Generator transformation
• Update Strategy transformation

For more information, see the Informatica 10.0 Developer Mapping Guide.

Monitoring Tool
Effective in version 10.0, the Monitoring tool has the following new features:

Execution Statistics view

Contains the Navigator and views that were in the Monitoring tool in version 9.6.1.

Summary Statistics view

Displays resource usage, object distribution, and object states for a selected time range.

Views on the Execution Statistics view

You can view additional information about ad hoc mapping jobs, deployed mapping jobs, and mapping
objects in workflows in the Execution Statistics view. When you select one of these objects in the
contents panel, the details panel displays the following new views:

• Summary Statistics view. Displays throughput and resource usage information for the source and
target.
The following image shows the Summary Statistics view for a mapping job:



• Detailed Statistics view. Displays graphs of throughput and resource usage information for the
source and target. Appears for jobs that run in separate local processes for longer than one minute.
The following image shows the Detailed Statistics view for a mapping job in a workflow:

For more information, see the "Viewing Data" chapter in the Informatica 10.0 Developer Tool Guide.

Object Versioning
Effective in version 10.0, when the Model repository is integrated with a version control system, the version
control system protects objects from being overwritten by other members of the development team. You can
check objects out and in, undo the checkout of objects, and view and restore historical versions of objects.

The Developer tool depicts a versioned Model repository with a white icon decorated with a green check
mark.

The following image shows two connected repositories: MRS1, which has been integrated with a version
control system, and MRS2, which has not:

For more information, see the "Model Repository" chapter in the Informatica 10.0 Developer Tool Guide.



Physical Data Objects in an Application
Effective in version 10.0, you can add a physical data object to an application.

For more information, see the "Application Deployment" chapter in the Informatica 10.0 Developer Tool Guide.

Profiles
This section describes new Developer tool features for profiles and profile results.

Column Profiles with JSON and XML Data Sources


Effective in version 10.0, you can use the following methods to create a column profile with JSON and XML
data sources:

• Flat File. In this method, you create a text file that contains the location of the JSON or XML source file,
as shown in the example after this list. Create a flat file data object with the text file, and then create a
column profile on the flat file data object.
• Complex file reader. In this method, you create a complex file data object on the JSON or XML source file,
and create a column profile with the complex file data object.
• JSON or XML file in HDFS. In this method, you need to create a connection with HDFS, and create a
complex file data object on the JSON or XML file in HDFS. You can create a column profile with the
complex file data object.
• JSON or XML files in a folder. In this method, you need to consolidate all the JSON or XML files into a
folder. Create a connection with HDFS, and create a complex file data object with the folder. You can
create a column profile on the complex file data object.
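In the flat file method, the text file that you profile contains only the location of the JSON or XML source; for
example, a one-line file such as the following, where the path is illustrative:

    /data/sources/customers.json

Create the flat file data object from this text file, and then create the column profile on the data object.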
For more information about column profiles with JSON and XML data sources, see the "Data Object Profiles"
chapter in the Informatica 10.0 Data Discovery Guide.

Decimal Data Type


Effective in version 10.0, you can create profiles with columns that have the Decimal data type with a
precision of up to 38 digits.

For more information, see the Informatica 10.0 Data Discovery Guide.

Foreign Key Curation


Effective in version 10.0, when you reject an inferred column relationship, all the associated relationships are
also rejected.

For more information about curation, see the "Enterprise Discovery Results" chapter in the Informatica 10.0
Data Discovery Guide.

JDBC Connectivity
Effective in version 10.0, you can specify a JDBC connection as a profiling warehouse connection for IBM
DB2 UDB, Microsoft SQL Server, and Oracle database types. You can create column profiles, rule profiles,
domain discovery, and scorecards with a JDBC connection.

For more information, see the Informatica 10.0 Installation and Configuration Guide.

Object Versioning
Effective in version 10.0, when the Model repository is integrated with a version control system, the version
control system protects objects from being overwritten by other members of the development team. You can
check profiles out and in, undo the checkout of profiles, and view and restore historical versions of profiles.

For more information about object versioning, see the "Informatica Developer Profiles" chapter in the
Informatica 10.0 Data Discovery Guide.



Informatica Development Platform
This section describes new features and enhancements to the Informatica Development Platform.

Informatica Connector Toolkit


Effective in version 10.0, you can use the following features in the Informatica Connector Toolkit:
Java data types

You can map the native data types to Java data types. When you map the native data type, select the
best Java data type to read from the data source and select the best native data type to write to the
target database or application.

Multiple native metadata objects

You can define multiple native metadata definitions for an adapter. For example, you can create different
native metadata objects for tables, views, and synonyms in a relational data source.

Sort and select

You can define Sort statement support for an adapter to retrieve data from the data source in a specific
order. You can define whether the adapter supports the Select statement when the adapter reads from the
data source. You can use the Informatica Connector Toolkit to define the following Select statements for
an adapter:

• Select All
• Select Any
• Select Distinct
• Select First Row
• Select Last Row

Partition

You can specify the partition type and implement the partition logic to use when the adapter reads or
writes data.

You can specify one of the following partition types or all the partition types for an adapter:

• Dynamic. The Data Integration Service determines the number of partitions at run time based on the
partition information from the data source.
• Static. The Data Integration Service determines partitioning logic based on the partition information
that the user specifies, such as the number of partitions or key range partitioning.

Parameterization

You can specify whether the read and write capability attributes of a native metadata object support full
parameterization or partial parameterization. The read and write capability attributes of the native
metadata object can be assigned values or parameters at run time.

Pre and Post data operation

You can implement pre and post tasks that can be run before or after a read or write operation. For
example, you can implement the functionality to truncate a target table before a write operation.

Messages

You can create messages to handle exceptions that occur during the design time or run time of the
adapter. You can use the Message wizard to add, edit, or delete messages. You can localize the
message files if required.



C run time

You can implement the run-time behavior of the adapter in C. Write code to define how the adapter
reads from and writes to the data source.

Reject files

You can implement support for reject files to handle data rejected by the target.

For more information, see the Informatica Development Platform 10.0 Informatica Connector Toolkit
Developer Guide.

Mappings
This section describes new mapping features in version 10.0.

Informatica Mappings
This section describes new mapping features in version 10.0.

Dynamic Mappings
Effective in version 10.0, you can configure dynamic mappings to change sources, targets, and
transformation logic at run time based on parameters and rules that you define. You can determine which
ports a transformation receives, which ports to use in the transformation logic, and which links to establish
between transformation groups. Dynamic mappings enable you to manage frequent metadata changes to the
data sources or to reuse the mapping logic for different data sources with different schemas.

Dynamic mappings include the following features that you can configure:

• Dynamic sources allow changes to the metadata in flat file and relational sources at run time. When the
metadata in a flat file or relational source changes, Read and Lookup transformations can get data object
columns directly from the dynamic sources at run time.
• Transformations can include dynamic ports, which receive one or more columns that can change based
on the rules that you define. You can define rules to include or exclude columns in a dynamic port.
The following transformations can include dynamic ports:
- Aggregator
- Expression
- Filter
- Joiner
- Lookup
- Rank
- Router
- Sequence Generator
- Sorter
- Update Strategy
• You can define a port selector in the Joiner transformation, in the Lookup transformation, and in the
Expression transformation. A port selector is an ordered list of ports that you can reference in the
transformation logic. Configure a port selector to filter the ports that flow into the transformation and to
reference the ports in a join condition, a lookup condition, or a dynamic expression.
• You can define a dynamic expression in an Expression transformation. A dynamic expression returns
results to a dynamic output port. You can reference a port selector or a dynamic port in a dynamic
expression. When you reference a dynamic port or a port selector, the dynamic expression runs one time
for each port in the dynamic port or the port selector. The Expression transformation generates a separate
output port for each expression instance.
• Dynamic targets allow you to define the columns for flat file and relational targets at run time. Write
transformations can generate columns for the targets at run time based on an associated data object or
the mapping flow. Write transformations that represent relational targets can also create or replace tables
at run time.
• Transformations can have links between groups that determine which ports to connect at run time based
on a policy or a parameter.
• Sources and targets, rules for ports, and transformation properties can change at run time based on
parameters.

For more information about dynamic mappings, see the "Dynamic Mappings" chapter in the Informatica 10.0
Developer Mapping Guide.

Mapping Outputs
Effective in version 10.0, you can create mapping outputs that return aggregated values from the mapping
run. Mapping outputs are the result of aggregating a field value or an expression from each row that a
mapping processes.

For example, you can configure a mapping output to summarize the total amount of an order field from the
source rows that the transformation receives. You can persist a mapping output value in the repository. You
can assign a persisted mapping output value to the Mapping task input parameter. You can also assign
mapping outputs to workflow variables.

Create a mapping output in the mapping Outputs view. Define the expression to aggregate in an Expression
transformation in the mapping.
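
For example, to summarize the total amount of an order field, you might define a mapping output whose
aggregation expression sums the field in an Expression transformation; the port name is illustrative:

    SUM( ORDER_AMOUNT )

The Data Integration Service aggregates the value from each row that the Expression transformation
receives and returns the total as the mapping output.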

For more information, see the Informatica 10.0 Developer Mapping Guide.

Mapping Task Input


Effective in version 10.0, you can assign persisted mapping outputs to input parameters of the same
Mapping task. Persisted mapping outputs are mapping outputs that the Data Integration Service saved in the
repository from a previous workflow run. For example, you might choose to persist the latest order date from
a previous workflow run. In the Mapping task Input view, you can assign the persisted value to an input
parameter. You might include the input parameter in a filter expression to skip rows with order dates that are
less than the last date.

For more information, see the "Mapping Tasks" chapter in the Informatica 10.0 Developer Workflow Guide.

Mapping Task Output


Effective in version 10.0, you can assign mapping outputs to workflow variables. You can assign current user-
defined mapping outputs and persisted user-defined mapping outputs to workflow variables. The current
value is a value that the Mapping task generated in the workflow that is running. The persisted mapping
output is a value that is in the repository from a previous run. You can also assign system-defined mapping
outputs to workflow variables. Assign mapping outputs to workflow variables in the Mapping task Output
view.



For more information, see the "Mapping Tasks" chapter in the Informatica 10.0 Developer Workflow Guide.

Optimization Methods
Effective in version 10.0, Informatica has the following new features for optimization methods:

Global predicate optimization method

The Data Integration Service can apply the global predicate optimization method. When the Data
Integration Service applies the global predicate optimization method, it splits, moves, removes, or
simplifies the filters in a mapping. The Data Integration Service filters data as close to the source as
possible in the pipeline. It also infers the predicate expressions that a mapping generates.

For more information, see the "Mapping Optimization" chapter in the Informatica 10.0 Performance
Tuning Guide.

Pushdown optimization method

You must select a pushdown type to push transformation logic to the source database. You can choose
to push down none of the transformation logic, partial transformation logic, or full transformation logic
to the source database. You can also view the mapping optimization plan for the pushdown type.

If the mapping has an Update Strategy transformation, you must determine pushdown compatibility for
the mapping before you configure pushdown optimization.

For more information, see the "Pushdown Optimization" chapter in the Informatica 10.0 Developer
Mapping Guide.

Dataship-join optimization method

If a mapping requires a join between two tables of different sizes that reside in different databases, the Data
Integration Service can apply the dataship-join optimization method.

For more information, see the "Mapping Optimization" chapter in the Informatica 10.0 Performance
Tuning Guide.

Mapping Optimization Plan

You can view how optimization methods affect mapping performance in a mapping optimization plan.

For more information, see the "Mapping Optimization" chapter in the Informatica 10.0 Performance
Tuning Guide.

Parameters
Effective in version 10.0, Informatica has the following new features for parameters:

Parameter usage

You can use parameters to represent additional properties such as connections, SQL statements, sort
and group-by port lists, expression variables, and the run-time environment.

Parameter types

You can use the following parameter types for dynamic mappings: expression, input link set, port, port
list, resource, and sort list.

Binding parameters between mappings, mapplets, and transformations

You can bind mapping parameters to mapplet parameters or to transformation parameters in the
Instance Value column of a Parameters tab. You can also bind mapplet parameters to transformation
parameters.

When you bind a parameter to another parameter, the parameter overrides the other parameter at run
time. You can create a mapping or a mapplet parameter from an existing parameter and bind the
parameters in one step. Click the Expose as Mapping Parameter option or the Expose as Mapplet
Parameter option for the parameter you want to override.

You can bind parameters from a mapping to parameters in a Read or Write logical data object mapping.

Parameter sets

You can define a parameter set for a workflow or mapping. A parameter set is an object in the Model
repository that contains a set of parameters and parameter values to use at run time. You use a
parameter set with a mapping, Mapping task, or workflow. You can add one or more parameter sets to
an application when you deploy the application. You can add a parameter set to multiple applications
and deploy them.

Run-time environment parameter

You can set the run-time environment with a parameter. Configure a string parameter at the mapping
level. Set the default value to Native or Hadoop. When you select the run-time environment for the
mapping, click Assign Parameter and select the parameter that you configured.

For more information about parameters, see the Mapping Parameters chapter in the Informatica 10.0
Developer Mapping Guide.

Partitioned Mappings
Effective in version 10.0, Informatica has the following new features for partitioned mappings:

Partitioned transformations

Additional transformations support partitioning. When a mapping enabled for partitioning contains the
following transformations, the Data Integration Service can use multiple threads to transform the data:

• Address Validator
• Case Converter
• Classifier
• Comparison
• Data Masking
• Data Processor
• Decision
• Key Generator
• Labeler
• Match, when configured for identity match analysis
• Merge
• Normalizer
• Parser
• Sequence Generator
• Sorter
• Standardizer
• Weighted Average



Cache partitioning

For an Aggregator, Joiner, or Rank transformation, you can configure multiple cache directories to
optimize performance during cache partitioning for the transformation. You can use the default CacheDir
system parameter value if an administrator configured multiple cache directories for the Data Integration
Service. Or, you can override the default CacheDir system parameter value to configure multiple cache
directories specific to the transformation.

For a Sorter transformation, you can configure multiple work directories to optimize performance during
cache partitioning for the transformation. You can use the default TempDir system parameter value if an
administrator configured multiple temporary directories for the Data Integration Service. Or, you can
override the default TempDir system parameter value to configure multiple directories specific to the
transformation.

Mappings that order data

The Data Integration Service can create partitions for a mapping that establishes a sort order. You can
establish sort order in a mapping with a sorted flat file source, a sorted relational source, or a Sorter
transformation. When the Data Integration Service adds a partition point to a mapping, it might
redistribute data and lose the order established earlier in the mapping. To maintain order in a partitioned
mapping, you must specify that Expression, Java, Sequence Generator, SQL, and Write transformations
maintain the row order in the transformation advanced properties.

Partitioned flat file targets

To optimize performance when multiple threads write to a flat file target, you can configure multiple
output file directories for a flat file data object. You can use the default TargetDir system parameter
value if an administrator has configured multiple target directories for the Data Integration Service. Or,
you can override the default TargetDir system parameter value to configure multiple output file
directories specific to the flat file data object.

Suggested parallelism value for transformations

If you override the maximum parallelism for a mapping, you can define a suggested parallelism value for
a specific transformation. The Data Integration Service uses the suggested parallelism value for the
number of threads for that transformation pipeline stage as long as the transformation can be
partitioned. You can define a suggested parallelism value that is less than the maximum parallelism
value defined for the mapping or the Data Integration Service. You might want to define a suggested
parallelism value to optimize performance for a transformation that contains many ports or performs
complicated calculations.

For more information about partitioned mappings, see the "Partitioned Mappings" chapter in the Informatica
10.0 Developer Mapping Guide.

Run-time Properties
Effective in version 10.0, you can configure the following run-time properties for a mapping:

Stop on Errors

Stops the mapping if a nonfatal error occurs in the reader, writer, or transformation threads. Default is
disabled.

Target Commit Interval

The number of rows to use as a basis for a commit. The Data Integration Service commits data based on
the number of target rows that it processes and the constraints on the target table.

For more information, see the Informatica 10.0 Developer Mapping Guide.

Target Load Order Constraints
Effective in version 10.0, you can configure constraints to control the order in which rows are loaded and
committed across target instances in a mapping. Define constraints on the Load Order tab of the mapping
Properties view. Each constraint consists of a primary target name and a secondary target name to restrict
the load order.

For more information, see the Informatica 10.0 Developer Mapping Guide.

Metadata Manager
This section describes new Metadata Manager features in version 10.0.

Tableau Resources
Effective in version 10.0, you can create and configure a Tableau resource to extract metadata from Tableau
Server.

For more information about creating and configuring Tableau resources, see the "Business Intelligence
Resources" chapter in the Informatica 10.0 Metadata Manager Administrator Guide.

For more information about supported metadata source versions, see the PCAE Metadata Manager XConnect
Support Product Availability Matrix on Informatica Network:
[Link]

Data Lineage Enhancements


Effective in version 10.0, data lineage diagrams have the following enhancements:

Summary lineage for PowerCenter mappings

When you view a data lineage diagram that includes a PowerCenter mapping, Metadata Manager
displays a summarized view of the mapping by default. The summary view displays mapping inputs and
outputs in the data lineage diagram but hides the transformation logic. The summary view reduces the
complexity of the data lineage diagram. It also reduces the amount of time it takes for Metadata
Manager to generate the data lineage diagram.

To view all of the transformation logic in a mapping, click Switch to Detail on the data lineage diagram
toolbar. The following image shows the Switch to Detail button:

To switch from the detail view back to the summary view, refresh the diagram.

Filter objects

You can filter the objects that appear in a data lineage diagram. You can filter individual objects or all
objects of a particular class. For example, you might want to remove all business terms from a data
lineage diagram. You can remove any filter that you apply.

Improved performance

Metadata Manager uses a file-based graph database for storing and retrieving data lineage linking
information. As a result, Metadata Manager generates data lineage diagrams more quickly than it did in
previous versions.



When you upgrade to version 10.0, the upgrade process creates the graph database and copies data
lineage linking information from the Metadata Manager repository to the graph database. You can
configure the location that Metadata Manager uses to store the graph database files.

Cancel creation of a diagram

If Metadata Manager takes a long time to generate a data lineage diagram, you can cancel creation of
the diagram.

For more information about data lineage diagrams, see the "Working with Data Lineage" chapter in the
Informatica 10.0 Metadata Manager User Guide. For more information about configuring the Metadata
Manager lineage graph location, see the "Metadata Manager Service" chapter in the Informatica 10.0
Application Service Guide.

Metadata Catalog Views


Effective in version 10.0, the metadata catalog contains two different views for browsing metadata: the List
view and the Tree view. Use the List view to drill down through resources, logical groups, and metadata
objects individually. Use the Tree view to display metadata objects in a hierarchy.

For more information about the metadata catalog views, see the "Viewing Metadata" chapter in the
Informatica 10.0 Metadata Manager User Guide.

Impala Queries in Cloudera Navigator Resources


Effective in version 10.0, Metadata Manager can extract Impala query templates and query executions from a
Cloudera Hadoop cluster.

For more information about Impala queries in Cloudera Navigator resources, see the "Database Management
Resources" chapter in the Informatica 10.0 Metadata Manager Administrator Guide.

Parameters in Informatica Platform Resources


Effective in version 10.0, Informatica Platform resources can extract metadata for mappings that use
mapping parameters.

If an Informatica Platform 10.x application includes a mapping that uses parameters, you can configure
Metadata Manager to use the parameter values from a parameter set. You assign a parameter set to a
mapping when you create an Informatica Platform resource. Metadata Manager uses the parameter values to
display the mapping objects and to display data lineage.

For more information about Informatica Platform resources, see the "Data Integration Resources" chapter in
the Informatica 10.0 Metadata Manager Administrator Guide.

Recent History
Effective in version 10.0, Metadata Manager maintains a history of the objects that you view in the metadata
catalog. Use the recent history to quickly return to an object that you previously viewed. Metadata Manager
clears the recent history when you log out.

For more information, see the "Viewing Metadata" chapter in the Informatica 10.0 Metadata Manager User
Guide.



Related Catalog Objects and Impact Summary Filter and Sort
Effective in version 10.0, when you view details for a metadata object or business term, you can filter and sort
the related catalog objects and the impact summary. You can filter and sort by object class, object name, or
path. You can also filter the impact summary by metadata source type.

For more information, see the "Viewing Metadata" chapter in the Informatica 10.0 Metadata Manager User
Guide.

Session Task Instances in the Impact Summary


Effective in version 10.0, the impact summary lists PowerCenter Session task instances. The impact
summary lists a Session task instance when you view metadata details for an object that impacts or is
impacted by a PowerCenter mapping. When you export the metadata object and include the impact summary,
the export file also lists the associated Session task instance in the Impact Summary section.

The impact summary lists the Session task instance because it can affect the data flow. A Session task
instance can override source or target connection information. It can also contain an SQL query that
overrides the default query used to extract data from the source.

For more information about the impact summary, see the "Viewing Metadata" chapter in the Informatica 10.0
Metadata Manager User Guide.

Application and Data Lineage Properties


Effective in version 10.0, you can configure new application and data lineage properties in the Metadata
Manager [Link] file.

The following Metadata Manager application properties are new in [Link]:

[Link]

Maximum number of errors that the Metadata Manager Service can encounter before the custom
resource load fails.

[Link]

Number of errors that the Metadata Manager Service writes to the in-memory cache and to the [Link]
file in one batch when you load a custom resource.

The following data lineage properties are new in [Link]:

[Link]

Maximum number of graph elements, including edges and vertices, that the Metadata Manager Service
can process in a single transaction during lineage graph creation.

[Link]

Number of records that the Metadata Manager Service processes in one block when it retrieves data
lineage linking information from the Metadata Manager warehouse to populate the graph database.

For more information about the [Link] file, see the "Metadata Manager Properties Files" appendix in
the Informatica 10.0 Metadata Manager Administrator Guide.



PowerCenter
This section describes new PowerCenter features in version 10.0.

High Availability
Effective in version 10.0, you can enable the PowerCenter Integration Service and PowerCenter client to read
from and write to a Hadoop cluster that uses a highly available NameNode.

For more information, see the "PowerExchange for Hadoop Configuration" chapter in the Informatica 10.0
PowerExchange for Hadoop User Guide for PowerCenter.

PowerExchange Adapters
This section describes new PowerExchange adapter features in version 10.0.

PowerExchange Adapters for Informatica


This section describes new Informatica adapter features in version 10.0.

PowerExchange for DataSift


Effective in version 10.0, you can parameterize the DataSift data object read operation properties.

For more information, see the Informatica PowerExchange for DataSift 10.0 User Guide.

PowerExchange for Facebook


Effective in version 10.0, you can parameterize the Facebook data object read operation properties.

For more information, see the Informatica PowerExchange for Facebook 10.0 User Guide.

PowerExchange for Greenplum


Effective in version 10.0, you can perform the following tasks with PowerExchange for Greenplum:

• You can configure dynamic partitioning for Greenplum data objects. You can configure the partition
information so that the Data Integration Service determines the number of partitions to create at run time.
• You can parameterize Greenplum data object operation properties to override the write data object
operation properties during run time.
• You can use the Max_Line_Length integer to specify the maximum length of a line in the XML
transformation data that is passed to gpload.

For more information, see the Informatica PowerExchange for Greenplum 10.0 User Guide.

PowerExchange for HBase


Effective in version 10.0, you can parameterize the HBase data object read and write operation properties.

For more information, see the Informatica PowerExchange for HBase 10.0 User Guide.

PowerExchange for HDFS


Effective in version 10.0, you can parameterize the complex file data object read and write operation
properties.

For more information, see the Informatica PowerExchange for HDFS 10.0 User Guide.

PowerExchange for LinkedIn
Effective in version 10.0, you can parameterize the LinkedIn data object read operation properties.

For more information, see the Informatica PowerExchange for LinkedIn 10.0 User Guide.

PowerExchange for SAP NetWeaver


Effective in version 10.0, you can perform the following tasks with PowerExchange for SAP NetWeaver:

• You can use the Developer tool to create an SAP Table data object and a data object read operation. You
can then add the read operation as a source or lookup in a mapping, and run the mapping to read or look
up data from SAP tables.
• When you read data from SAP tables, you can configure key range partitioning. You can also use
parameters to change the connection and Table data object read operation properties at run time.
• You can run a profile against SAP Table data objects.
• When you create an SQL Data Service, you can add an SAP Table data object read operation as a virtual
table.
• You can read data from the SAP BW system through an open hub destination or InfoSpoke.
• When you read data from the SAP BW system, you can configure dynamic or fixed partitioning. You can
also use parameters to change the connection and BW OHS Extract data object read operation properties
at run time.
• You can write data to the SAP BW system. You can use a 3.x data source or a 7.x data source to write
data to the SAP BW system.
• When you write data to the SAP BW system, you can configure dynamic partitioning. You can also use
parameters to change the connection and BW Load data object write operation properties at run time.
• You can create an SAP connection in the Administrator tool.
• When you use the Developer tool to read data from or write data to SAP BW, you can create an SAP BW
Service in the Administrator tool.
For more information, see the Informatica PowerExchange for SAP NetWeaver 10.0 User Guide.

PowerExchange for Teradata Parallel Transporter API


Effective in version 10.0, you can perform the following tasks with PowerExchange for Teradata Parallel
Transporter API:

• You can use PowerExchange for Teradata Parallel Transporter API to read large volumes of data from
Teradata tables.
• You can use the Update system operator to perform insert, update, upsert, and delete operations against
Teradata database tables.
• You can use the Secure Sockets Layer (SSL) protocol to configure a secure connection between the
Developer tool and the Teradata database.
• You can configure dynamic partitioning for Teradata Parallel Transporter API data objects. You can
configure the partition information so that the Data Integration Service determines the number of
partitions to create at run time.
• You can parameterize Teradata data object operation properties to override the read and write data object
operation properties during run time.

For more information, see the Informatica PowerExchange for Teradata Parallel Transporter API 10.0 User
Guide.



PowerExchange for Twitter
Effective in version 10.0, you can parameterize the read operation properties for Twitter and Twitter
Streaming data objects.

For more information, see the Informatica PowerExchange for Twitter 10.0 User Guide.

PowerExchange for Web Content-Kapow Katalyst


Effective in version 10.0, you can parameterize the Web Content-Kapow Katalyst data object read operation
properties.

For more information, see the Informatica PowerExchange for Web Content-Kapow Katalyst 10.0 User Guide.

Reference Data
This section describes new reference data features in version 10.0.

Classifier Models
Effective in version 10.0, you can perform the following actions in a classifier model in the Developer tool:

• Import reference data values and label values to a classifier model from a data source.
• Select the configurable options from a ribbon in the classifier model. For example, select the Manage
Labels option to access the options to add, delete, or update the label values in a classifier model.
• Use wildcard characters in the search filter in a classifier model.
• Add a single row of data to a classifier model.
• Apply a label value to multiple rows of classifier model data in a single operation.
For more information, see the "Classifier Models" chapter in the Informatica 10.0 Reference Data Guide.

Probabilistic Models
Effective in version 10.0, you can perform the following actions in a probabilistic model in the Developer tool:

• Assign a label to multiple reference data values in a single operation.
• Import label values and reference data values from a data source to a probabilistic model.
• View the current number of reference data values that use a label that you select.
Effective in version 10.0, the Developer tool displays the data rows in a probabilistic model on one or more
pages. A page contains 100 reference data rows. You can move to the next page or the previous page in the
model, and you can move to a page number that you specify.

For more information, see the "Probabilistic Models" chapter in the Informatica 10.0 Reference Data Guide.



Rule Specifications
This section describes new features in rule specifications in version 10.0.

Linked Assets
Effective in version 10.0, the Design workspace in the Analyst tool displays a hyperlink to an asset that you
link to the rule specification. For example, if you use another rule asset in the rule specification, the
workspace displays a link to the rule asset. The Design workspace also displays a hyperlink to any rule that
you generate from the rule specification.

Find the hyperlinks under Assets in the rule specification properties.

For more information, see the "Rule Specification Configuration" chapter of the Informatica 10.0 Rule
Specification Guide.

Mapplet Rules
Effective in version 10.0, you can use mapplet rules in the following ways:

• You can configure a rule specification that is valid during a time period that you define. You specify the
dates and times that indicate the start and the end of the time period. The time period also applies to any
mapplet rule that you compile from the rule specification. If you run a mapping that reads the mapplet rule
outside the time period, the mapping fails.
For more information, see the "Rule Specification Configuration" chapter of the Informatica 10.0 Rule
Specification Guide.
• You can add a mapplet rule to a condition and an action in a rule statement. Connect an input from the
rule specification to an input port on the mapplet rule. Or, use a constant value as an input to the mapplet
rule. Select an output port from the mapplet rule as output from the condition or the action.
For more information, see the "Rule Specification Configuration" chapter of the Informatica 10.0 Rule
Specification Guide.

Rule Statements
Effective in version 10.0, you can perform the following operations in a rule statement:

• You can move or copy a rule statement within a rule set, and you can move or copy a rule statement to
another rule set. You can move or copy a rule statement to a rule set in another rule specification. If you
move or copy a rule statement to another rule specification, the operation moves or copies the inputs that
the rule statement uses. The operation also moves or copies any test data that you entered and saved to
test the rule statement.
• You can move or copy a rule set to another location in the rule specification and to another rule
specification. If you move or copy a rule set to another rule specification, the operation moves or copies
the inputs and the test data that the rule set uses.
• You can move or copy test data from a rule specification to another rule specification.
• You can select the CONTAINS operator when you configure a condition in a rule statement. Use the
operator to determine the following information about the data values in an input column:
- Determine if an input column contains a data value that you enter.
- Determine if an input column contains a data value that appears on the same row in another input
column.
• You can configure a rule statement to search for an input value in a list of values that you enter.
• A rule set includes a predefined rule statement that specifies an action to perform when the preceding
rule statements generate no data. By default, the rule statement specifies that the rule set performs no
action. You can update the action in the rule statement.



For more information, see the "Rule Statement Configuration" in the Informatica 10.0 Rule Specification Guide.

User Interface Enhancements


Effective in version 10.0, the Design workspace includes the following user interface enhancements for rule
specifications:

• When you select the Inputs view for a rule set, the workspace hides any input that the rule set does not
contain.
• You can drag the rule specification in the workspace canvas.
• You can use the mouse wheel to zoom in and zoom out of the rule specification.
• You can expand and collapse the rule specification tree structure to show or hide different parts of the
rule specification.
• You can add a text description to an input.
• A rule set that reads the output of a child rule set displays the child rule set name in the list of inputs.
• A rule set that is not valid appears in a different color from a valid rule set.
• Some configurable options have new names.
For more information, see the Informatica 10.0 Rule Specification Guide.

Version Control
Effective in version 10.0, you can work with rule specifications in a versioned Model repository. If you open a
rule specification from a Model repository that uses version control, the Analyst tool applies the version
control properties to the rule specification. Use the Edit option in the Design workspace to check out a rule
specification from the repository. Use the Save and Finish option in the workspace to check in the rule
specification. You can also undo a checkout operation.

You can view an earlier version of the rule specification and revert to an earlier version in edit mode and in
read-only mode. When you view an older version of a rule specification in read-only mode, you can perform all
of the read-only operations that apply to the current version of the rule specification. You can view and
validate a rule specification in read-only mode. You can test a rule specification in read-only mode if the rule
specification contains test data.

For more information, see the "Model Repository" chapter in the Informatica 10.0 Analyst Guide.

Security
This section describes new security features in version 10.0.

Groups
Effective in version 10.0, Informatica includes a default group named Operator. Use the Operator group to
manage multiple users who are assigned the Operator role.

For more information, see the Informatica 10.0 Security Guide.

Privileges
Effective in version 10.0, Informatica includes the following new privileges:

Model Repository Service privilege

The Manage Team-based Development privilege allows Model repository administrators to perform
actions related to object lock management and versioned object management.

Scheduler Service privileges

The Scheduler privilege group determines the actions that users can perform on schedules and
scheduled jobs.

For more information, see the "Command Line Privileges and Permissions" appendix in the Informatica 10.0
Security Guide.

Roles
Effective in version 10.0, Informatica includes a custom role named Operator. The Operator role includes
privileges for managing, scheduling, and monitoring application services.

For more information, see the Informatica 10.0 Security Guide.

Transformation Language Functions


This section describes new features of transformation language functions in version 10.0.

Informatica Functions
This section describes new features of Informatica functions in version 10.0.

CaseFlag
Effective in version 10.0, the CaseFlag option does not support NULL values for the following functions:
GREATEST, LEAST, IN, and INDEXOF.

Previously, the CaseFlag option supported NULL values.
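
A sketch of a CaseFlag usage that remains valid; the port name and values are illustrative. A CaseFlag of 0
makes the comparison case insensitive and a nonzero CaseFlag makes it case sensitive, but NULL is no
longer accepted:

    IN( ITEM_NAME, 'Chair', 'Table', 0 )  -- also matches 'chair' and 'TABLE'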

For more information, see the "Functions" chapter in the Informatica 10.0 Developer Transformation Language
Reference.

TO_DECIMAL38 Function
Effective in version 10.0, you can use the TO_DECIMAL38 function to convert a string or numeric value to a
decimal value. The function returns a decimal value of precision and scale between 0 and 38, inclusive.

For more information, see the Informatica 10.0 Developer Transformation Language Reference.

Transformations
This section describes new transformation features in version 10.0.



Informatica Transformations
This section describes new features in Informatica transformations in version 10.0.

Address Validator Transformation


Effective in version 10.0, you can define parameters to set the following transformation properties:

• Geocode data type
• Global Max Field Length
• Max Result Count
• Optimization Level
• Standardize Invalid Address
For more information, see the "Address Validator Transformation" chapter in the Informatica 10.0 Developer
Transformation Guide.

Bad Record Exception Transformation


Effective in version 10.0, you can use parameters to specify the upper threshold and the lower threshold that
the transformation uses to identify bad records.

For more information, see the "Mapping Parameters" chapter of the Informatica 10.0 Developer Mapping
Guide.

Data Processor Transformation


This section describes new Data Processor transformation features.

Data Transformation Libraries


Data Transformation libraries contain predefined transformation components for a range of industry
messaging standards. The Data Processor transformation uses a Library object to transform an industry
messaging type input into a different format, such as an XML output document, or from an XML input to an
industry message output.

The Library object contains many objects and components, such as Parsers, Serializers, and XML schemas,
preset to transform the industry standard input and specific application messages into XML or other output.
Some libraries contain additional objects for message validation, acknowledgments, and diagnostic displays.
You can also customize the properties and validation settings of the Library object.

You can create Library objects for the DTCC-NSCC, EDIFACT, EDI-X12, HIPAA, HL7, and SWIFT libraries.

For more information, see the Informatica Data Transformation 10.0 User Guide and the Informatica Data
Transformation 10.0 Libraries Guide.

Complex File Reader without a Streamer


You can use the Complex File Reader without a Streamer as the start-up component in a Data Processor
transformation that receives the input.

For more information, see the Informatica Data Transformation 10.0 User Guide.

Pass-Through Ports with Custom Data Types


Data Processor transformations can include pass-through ports with custom data types.

For more information about custom data types, see the Informatica Developer 10.0 User Guide.

RunMapplet Statement for XMap
You can define a RunMapplet mapping statement to call a mapplet from an XMap in a Data Processor
transformation. One or more MappletInput and MappletOutput statements can be nested under the
RunMapplet statement. Values are mapped to the mapplet input ports in the same order that they are listed in
the MappletInput statements. The values in the mapplet output ports are mapped to the MappletOutput
statement in the same order that they are listed in the mapplet ports.

For more information, see the Informatica Data Transformation 10.0 User Guide.

Script Mode Editing


You can edit a Script for the Data Processor transformation with an external editor. For example, you can
perform a global find and replace operation with an external editor.

For more information, see the Informatica Data Transformation 10.0 User Guide.

Decision Transformation
Effective in version 10.0, you can use parameters to specify input values in a Decision transformation script.

For more information, see the "Mapping Parameters" chapter of the Informatica 10.0 Developer Mapping
Guide.

Duplicate Record Exception Transformation


Effective in version 10.0, you can use parameters to specify the upper threshold and the lower threshold that
the transformation uses to identify duplicate records.

For more information, see the "Mapping Parameters" chapter of the Informatica 10.0 Developer Mapping
Guide.

Expression Transformation
This section describes the new features in the Expression transformation.

Dynamic Expressions

Effective in version 10.0, you can create an expression in a dynamic output port. When you create an
expression in a dynamic port, the expression is a dynamic expression. A dynamic expression might
generate more than one output port when the expression contains a port selector or a dynamic port.
When the dynamic expression runs against multiple ports, the expression returns an output value for
each port.
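
A sketch of a dynamic expression, assuming a port selector named Src_String_Ports that selects the
string ports flowing into the transformation:

    LTRIM( RTRIM( Src_String_Ports ) )

Because the expression references a port selector, it runs one time for each selected port, and the
transformation generates one trimmed output port for each of those ports.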

For more information about dynamic expressions, see the "Expression Transformation" chapter in the
Informatica 10.0 Developer Transformation Guide.

Mapping Outputs

Effective in version 10.0, you can configure mapping outputs. A mapping output is a single value that is
the result of aggregating a field or expression from each row that the mapping processes. For example,
a mapping output can summarize the total amount of an order field from all the source rows that the
transformation receives. A mapping output expression is a field value or an expression to aggregate
from the rows that the Expression transformation receives. You must define a mapping output in the
mapping Properties view before you can create the corresponding expression in the Expression
transformation.

For more information about mapping outputs, see the "Mapping Outputs" chapter in the Informatica 10.0
Developer Mapping Guide.



Test Expressions

Effective in version 10.0, you can test expressions that you configure in the Expression Editor. When you
test an expression, you enter sample data and then evaluate the expression.

You can test expressions when you configure expressions in the following ways:

• In an output or variable port in the Expression transformation
• In the Mapping Outputs view of an Expression transformation after adding the transformation to a
mapping

The following image shows the results of an expression that concatenates a sample first name and last
name:
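
For example, such an expression might be written as follows, where the port names and the sample values
that you enter are illustrative:

    FIRST_NAME || ' ' || LAST_NAME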

For more information about testing expressions, see the "Expression Transformation" chapter in the
Informatica 10.0 Developer Transformation Guide.

Hierarchical to Relational Transformation


This section describes the Hierarchical to Relational transformation that you create in the Developer tool.

The Hierarchical to Relational transformation is an optimized transformation introduced in version 10.0 that
converts hierarchical input to relational output.

For more information, see the Informatica 10.0 Developer Transformation Guide.

Match Transformation
Match Type Options in Identity Match Analysis
Effective in version 10.0, you can select the following options when you configure the Match transformation
to read a persistent store of identity index data:

Remove IDs from the database

The transformation deletes rows from the index tables if the rows share sequence identifiers with rows
in the mapping source data. The transformation does not perform match analysis when you select the
option.

Update the current IDs in the database

The transformation replaces rows in the index tables with rows from the mapping source data if the
rows share sequence identifiers. The transformation does not add rows to the index. The transformation
can include the rows that it does not add in the match analysis.

For more information, see the "Match Transformations in Identity Analysis" chapter of the Informatica 10.0
Developer Transformation Guide.

Matching Process Options in Identity Match Analysis


Effective in version 10.0, you can enable and disable match analysis when you configure the transformation
to update a persistent store of identity index data. Use the Matching Process option to enable or disable
match analysis.

For more information, see the "Match Transformations in Identity Analysis" chapter of the Informatica 10.0
Developer Transformation Guide.

Status Codes for Identity Analysis with a Persistent Index Store


Effective in version 10.0, the Match transformation can generate the following status codes to describe the
results of match analysis on a persistent index data store:

Absent

The index data store does not contain data for the current record.

Invalid

The transformation cannot analyze the current record. For example, the transformation cannot generate
index data for the record because the key field on the Match Type tab is not compatible with the record
data.

Removed

The transformation removes the index data for the record from the index data store.

Updated

The transformation updates the rows in the persistent data store with index data from the
transformation input record. The transformation input data and the persistent index data have common
sequence identifiers.

For more information, see the "Match Transformation" chapter of the Informatica 10.0 Developer
Transformation Guide.

Parameter Usage
Effective in version 10.0, you can use parameters to set the following options on the Match transformation:

• The match score threshold value.
• The relative weight that the transformation applies to the scores from each match strategy.
• The persistence method that the transformation applies to the persistent index data store in identity match analysis.

For more information, see the "Mapping Parameters" chapter of the Informatica 10.0 Developer Mapping
Guide.

Sequence ID Port
Effective in version 10.0, the Match transformation output ports include a Sequence ID port when you
configure the transformation to read a persistent index store. The transformation uses the sequence
identifier values to track the index data through the different stages of the match analysis.

For more information, see the "Match Transformation" chapter of the Informatica 10.0 Developer
Transformation Guide.



SQL Transformation
This section describes new features in the SQL transformation.

Effective in version 10.0, you can parameterize the connection for an SQL transformation. Define the
parameter in the mapping. Then, assign the parameter to the Connection Name in the SQL transformation
run-time properties.

For more information, see the "SQL Transformation" chapter in the Informatica 10.0 Developer Transformation Guide.

Transformations in Dynamic Mappings


This section describes new features in the transformations for dynamic mappings.

Effective in version 10.0, you can add dynamic ports to some transformations. You can also parameterize
which input ports to link to ports from an upstream transformation. You can configure port selectors to
reference multiple ports in transformation logic.

The transformations contain the following new tabs in the Properties view:

Group By

The Aggregator transformation, the Rank transformation, and the Sorter transformation require that you
configure groups of ports. You can now configure the groups on a Group By tab. You can define groups
by selecting ports or you can configure parameters that contain port lists. The Group By tab provides
flexibility when you configure the transformations with generated ports.

Port Selector

You can reference multiple ports in transformation logic. Define a port selector, which is an ordered list
of ports. You can use port selectors in dynamic expressions, join conditions, or lookup
conditions. When you define a port selector, you can include or exclude transformation ports based on
the port name, the port type, or a pattern of text characters.

Run-time Linking

When you configure transformations in a dynamic mapping, you can set parameters or link policies that
determine which ports to link between transformations. Configure run-time linking to link dynamic ports
to static ports. You can configure a link policy to link ports by name. You can configure an InputLinkSet
parameter to specify the names of the ports to link at run time.

For more information, see the Informatica 10.0 Developer Transformation Guide.

Workflows
This section describes new workflow features in version 10.0.

Informatica Workflows
This section describes new features in Informatica workflows in version 10.0.

Parallel Execution of Workflow Tasks
Effective in 10.0 Update 1, the Data Integration Service can run tasks on multiple sequence flows in a
workflow in parallel. To create the parallel sequence flows, add Inclusive gateways to the workflow in the
Developer tool.

Use an Inclusive gateway to split a sequence flow into multiple sequence flows. The Data Integration Service
runs the objects on every branch with a sequence flow condition that evaluates to true. The Data Integration
Service runs the objects on each branch concurrently. Use another Inclusive gateway to merge the sequence
flows into a single sequence flow. When the objects on all branches are complete, the Data Integration
Service passes the data from the second Inclusive gateway to the next object in the workflow.

You can add one or more instances of any type of task to a sequence flow between two Inclusive gateways.
You cannot add a Human task or a Voting task to more than one sequence flow between two Inclusive
gateways.

For more information, see the Informatica 10.0 Update 1 Developer Workflow Guide.

Mapping Tasks
Effective in version 10.0, Informatica has the following new features for Mapping tasks:

Mapping task log file directory

You can configure the directory where the Data Integration Service writes the Mapping task log. By
default, the Data Integration Service writes the Mapping task log file in the directory defined by the
system parameter, LogDir. The default location is disLogs/mappingtask. You can configure a different
directory for the Mapping task log file in the Mapping task Advanced properties. You can parameterize
the log file directory.

Mapping task log file name

You can configure a file name for the Mapping task log file. The Data Integration Service appends the file
name to the information in the Mapping Task Log File Directory field. It appends a UID and time stamp or
a mapping run number to the log file name, based on how you choose to save the log file. You can
parameterize the log file name. Configure the log file name in the Mapping task Advanced properties.

Mapping task log save type

You can save the Mapping task log file by timestamp or by the number of mapping task runs. The suffix
of the mapping task log file name reflects the option you select. You can configure how many log files to
save.

Java classpath

You can enter the classpath to add to the beginning of the system classpath when the Data Integration
Service runs the mapping task. Enter a Java classpath in the Advanced properties if you use third-party
Java packages, built-in Java packages, or custom Java packages in a Java transformation.

Mapping task parameter usage

Effective in version 10.0, you can view which objects in a mapping use a specific parameter. Select a
parameter on the Mapping task Input tab, and click Parameter Usage.

Custom properties

You can define custom properties for a Mapping task and configure the property values. You can also
parameterize a custom property.

For more information, see the Informatica 10.0 Developer Workflow Guide.



Chapter 18

Changes (10.0)
This chapter includes the following topics:

• Installation
• Application Services
• Big Data
• Business Glossary
• Command Line Programs
• Domain
• Informatica Administrator
• Informatica Analyst
• Informatica Developer
• Mappings
• Metadata Manager
• PowerCenter
• PowerExchange Adapters
• Reference Data
• Rule Specifications
• Security
• Sources and Targets
• Transformations
• Workflows

Installation
This section describes changes to the Informatica installation in version 10.0.

Changed Support
Effective in version 10.0, Informatica implemented the following changes in support that affect upgrade:

HP-UX

Dropped support. Migrate to a supported operating system before you upgrade.

Windows 32-bit

Dropped support for application services and for the Developer tool. Migrate to a supported operating system before you upgrade.

zLinux

Deferred support. Informatica will reinstate support in a future release.

Solaris

Deferred support. Informatica will reinstate support in a future release.

For more information about product requirements and supported platforms, see the Product Availability
Matrix on Informatica Network:
[Link]

Application Services
This section describes changes to application services in version 10.0.

Analyst Service
This section describes changes to Analyst Service features in version 10.0.

Stop Mode
Effective in version 10.0, the Analyst Service has complete, abort, and stop modes to disable the service.
Select the stop mode to stop all jobs, and then disable the Analyst Service.

Previously, only complete and abort modes were available to disable the service.

For more information, see the "Analyst Service" chapter in the Informatica 10.0 Application Service Guide.

Data Integration Service


This section describes changes to the Data Integration Service in version 10.0.

Email Server
Effective in version 10.0, you can no longer configure an email server for the Data Integration Service. The
email server properties for the Data Integration Service are removed. Scorecard notifications use the email
server configured for the domain. Workflow notifications use the email server configured for the Email
Service. Workflow notifications include emails sent from Human tasks and Notification tasks in workflows.

Previously, scorecard and workflow notifications used the email server configured for the Data Integration
Service.

The upgrade determines the email server to use based on the following notification types:



Scorecard notifications

Scorecard notifications use the email server configured for the domain. If you did not configure SMTP
for the domain in the previous version, the upgraded domain uses the email server configured for the
first Data Integration Service encountered during the upgrade. If you configured SMTP for the domain in
the previous version, the upgraded domain continues to use that email server.

The following email server properties available on the Data Integration Service in previous versions are
not available on the domain. You can no longer configure these properties for scorecard notifications:

• SMTP Server Connection Timeout
• SMTP Server Communication Timeout
• SMTP Authentication Enabled
• Use TLS Security
• Use SSL Security

Before you send scorecard notifications in version 10.0, verify that SMTP is correctly configured for the
domain. To use the same email server configured for the Data Integration Service in previous versions,
record the Data Integration Service values before upgrading.

Workflow notifications

Workflow notifications use the email server configured for the Email Service.

The following email server properties available on the Data Integration Service in previous versions are
not available on the Email Service. You can no longer configure these properties for workflow
notifications:

• SMTP Server Connection Timeout
• SMTP Server Communication Timeout

Before you send workflow notifications in version 10.0, configure an email server for the Email Service,
and then enable the Email Service. To use the same email server configured for the Data Integration
Service in previous versions, record the Data Integration Service values before upgrading.

For more information about configuring SMTP for the domain, see the "Domain Management" chapter in the
Informatica 10.0 Administrator Guide.

For more information about the Email Service, see the "System Services" chapter in the Informatica 10.0
Application Service Guide.

Execution Options
Effective in version 10.0, you configure the following execution options on the Properties view for the Data
Integration Service:

• Maximum Execution Pool Size
• Maximum Memory Size
• Maximum Parallelism
• Hadoop Kerberos Service Principal Name
• Hadoop Kerberos Keytab
• Temporary Directories
• Home Directory
• Cache Directory
• Source Directory
• Target Directory
• Rejected Files Directory
• Informatica Home Directory on Hadoop
• Hadoop Distribution Directory
• Data Integration Service Hadoop Distribution Directory
When the Data Integration Service is configured to run on primary and back-up nodes or on a grid, you can
override some of the execution options to define different values for each node with the compute role. When
the DTM runs a job on the compute node, the DTM uses the overridden value. You can override the following
options on the Compute view for the Data Integration Service:

• Home Directory
• Temporary Directories
• Cache Directory
• Source Directory
• Target Directory
• Rejected Files Directory
Previously, you configured the execution options on the Processes view for the Data Integration Service. You
could configure the execution options differently for each node where a service process ran.

If you configured the execution options differently for each service process in a previous version, the upgrade
determines the version 10.0 values based on the following situations:

Options without a compute override

If the option defines a maximum integer value, the highest value defined for all processes is used as the
Data Integration Service value on the Properties view. If the option defines a string value, the value
defined for the first node encountered during the upgrade is used as the Data Integration Service value
on the Properties view.

Options with a compute override

The value defined on the Processes view for a node is used as the compute override on the Compute
view for the same node. The value defined for the first node encountered during the upgrade is used as
the Data Integration Service value on the Properties view.
For more information about the execution options, see the "Data Integration Service" chapter in the
Informatica 10.0 Application Service Guide.

Maximum Session Size


Effective in version 10.0, the Data Integration Service process property Maximum Session Size is renamed to
Maximum Memory Per Request. You configure the Maximum Memory Per Request property for the following
Data Integration Service modules:

• Mapping Service Module. Default is 536,870,912 bytes.
• Profiling Service Module. Default is 536,870,912 bytes.
• SQL Service Module. Default is 50,000,000 bytes.
• Web Service Module. Default is 50,000,000 bytes.
Previously, you configured the Maximum Session Size for each Data Integration Service process. All of the
Data Integration Service modules used the same value. The default was 50,000,000 bytes.

The upgraded service uses the version 10.0 default value for each module. If you changed the default value
of Maximum Session Size in a previous version, you must change the value of Maximum Memory Per
Request after you upgrade.



For more information about the Maximum Memory Per Request property, see the "Data Integration Service"
chapter in the Informatica 10.0 Application Service Guide.

Run Jobs in Separate Processes


Effective in version 10.0, the Launch Jobs in Separate Processes property is renamed to the Launch Job
Options property. You can configure one of the following values for the Launch Job Options property:

In the service process

Runs jobs in the Data Integration Service process. Configure when you run SQL data service and web
service jobs on a single node or on a grid where each node has both the service and compute roles. SQL
data service and web service jobs typically achieve better performance when the Data Integration
Service runs jobs in the service process.

In separate local processes

Runs jobs in separate DTM processes on the local node. Configure when you run mapping, profile, and
workflow jobs on a single node or on a grid where each node has both the service and compute roles.
When the Data Integration Service runs jobs in separate local processes, stability increases because an
unexpected interruption to one job does not affect all other jobs.

In separate remote processes

Runs jobs in separate DTM processes on remote nodes. Configure when you run mapping, profile, and
workflow jobs on a grid where nodes have a different combination of roles.

When the Data Integration Service runs jobs in separate remote processes, stability increases because
an unexpected interruption to one job does not affect all other jobs. In addition, you can better use the
resources available on each node in the grid. When a node in a Data Integration Service grid has the
compute role only, the node does not have to run the service process. The machine uses all available
processing power to run mappings.

Previously, you set the Launch Jobs in Separate Processes property to true to run jobs in the Data Integration
Service process. You set the property to false to run jobs in separate DTM processes on the local node.

For more information about running jobs in separate processes, see the "Data Integration Service
Management" chapter in the Informatica 10.0 Application Service Guide.

Workflow and Human Task Configuration


The following Data Integration Service options change in version 10.0:

Workflow Orchestration Service Module replaces Workflow Service Module

Effective in version 10.0, you select the Workflow Orchestration Service Module to enable the Data
Integration Service to run workflows.

Previously, you selected the Workflow Service Module to run workflows.

Human Task Service Module is obsolete

Effective in version 10.0, the Workflow Orchestration Service Module runs all tasks in a workflow.

Previously, the Workflow Service Module ran all workflow tasks except Human tasks. The Human Task
Service Module ran any Human task in a workflow.

Workflow database replaces the Model repository and Human task database as workflow metadata store

Effective in version 10.0, a single database stores all run-time metadata for workflows, including Human
task instance metadata. Select the workflow database connection on the Data Integration Service.

Previously, you selected a database to store Human task metadata on the Data Integration Service. The
Model repository stored all other run-time metadata for workflows.



For more information about workflow and Human task configuration, see the "Data Integration Service"
chapter and the "Analyst Service" chapter in the Informatica 10.0 Application Service Guide.

Model Repository Service


This section describes changes to Model Repository Service features in version 10.0.

Repository Object Locks and Versions


Effective in version 10.0, if you try to edit an object that another user has locked, you receive a notification
that the object is locked by another user. You can choose to review the object in read-only mode, or you can
save the object with another name.

Previously, more than one user was allowed to open and edit an object. Only the last user who tried to save
the object received a notification that the object had been changed by another user.

If the Model repository is integrated with a version control system, you must check out an object before you
edit it.

For more information, see the "Model Repository" chapter in the Informatica 10.0 Developer Tool Guide.

Model Repository Paths


Effective in version 10.0, use the forward slash (/) when you specify a path in the Model repository. For
example, use the following path to specify a folder:
ModelRepository_name/Project_name/Folder_name
Previously, you could use other characters as the divider character between path elements. For example, in
some instances, a colon character followed the Model repository name.

For more information, see the "Model Repository" chapter in the Informatica 10.0 Developer Tool Guide.

SAP BW Service
This section describes changes to the SAP BW Service in version 10.0.

SAP BW Service for PowerCenter


Effective in version 10.0, the user interface option that you use in the Administrator tool to create an SAP BW
Service for PowerCenter has changed.

To create an SAP BW Service for PowerCenter, log in to Informatica Administrator. In the Domain Navigator,
right-click the domain, and click Actions > New > PowerCenter SAP BW Service.




Previously, you clicked Actions > New > SAP BW Service to create an SAP BW Service for PowerCenter.

Note: Effective in version 10.0, the SAP BW Service option is reserved for creating an SAP BW Service for the
Developer tool.

For more information, see the "SAP BW Service" chapter in the Informatica 10.0 Application Services Guide.



Big Data
This section describes changes to big data features.

Hive Environment
Effective in version 10.0, the Hive environment no longer appears as a run-time or validation environment in
the Developer tool user interface. The Hive environment is replaced by the Hive engine, which uses Hadoop
technologies such as MapReduce or Tez to process batch data.

For more information, see the Informatica 10.0 Big Data Edition User Guide.

JCE Policy File Installation


Effective in version 10.0, Informatica Big Data Management ships the JCE policy file and installs it when you
run the installer.

Previously, you had to download and manually install the JCE policy file for AES encryption.

Kerberos Authentication
Effective in version 10.0, a Hadoop cluster can no longer use an MIT key distribution center (KDC) alone for
Kerberos authentication. Hadoop clusters can use a Microsoft Active Directory KDC or an MIT KDC connected
to Active Directory with a one-way cross-realm trust.

Business Glossary
This section describes changes to Business Glossary in version 10.0.

Relationship View
Effective in version 10.0, the relationship view has the following changes:

Highlight Asset Occurrences


When you left-click an asset, the Analyst tool highlights the occurrences of the asset. Previously, you had to
right-click the asset to highlight the occurrences of the asset.

Display Asset Details


When you hover the mouse over the asset name, the Analyst tool displays the asset details. Previously you
had to click the asset name for the Analyst tool to display the asset details.

For more information, see the "Finding Glossary Content" chapter in the Informatica 10.0 Business Glossary
Guide.

Asset Phase
Effective in version 10.0, the asset phase has the following changes:

Pending Publish Phase


When you export the assets and not the associated business initiative, the Analyst tool changes the phase of
the assets from Pending Publish to Published in the export file.



In Review Phase
You cannot modify assets that are in the In Review phase.

For more information, see the Informatica 10.0 Business Glossary Guide.

Library Workspace
Effective in version 10.0, the Library workspace has the following changes:

Sort Assets
When you view the assets by asset type, you can sort Glossary assets by status and phase in the Library
workspace. Previously, you could not sort by the status and phase of the asset.

Find Option
When you look up assets by glossary, the option to enter search strings in the filter panel is no longer
available. Previously, you could search for assets when you looked up assets by glossary.

Default Asset List


When you view the assets by asset type or by glossary, the Analyst tool applies filters by default to hide
inactive and rejected assets. Previously, the Analyst tool did not filter the inactive and rejected assets by
default.

For more information, see the Informatica 10.0 Business Glossary Guide.

Import and Export


Effective in version 10.0, you can import and export Glossary templates independently of Glossary assets.
Previously, the Analyst tool did not have unique menu options to import or export Glossary templates.

When you export a glossary, you now have an option to include attachments and audit history. The Analyst
tool generates a .zip file when you export the audit history or attachments along with Glossary assets.

For more information, see the "Glossary Administration" chapter in the Informatica 10.0 Business Glossary
Guide.

Command Line Programs


This section describes changes to commands in version 10.0.

infacmd isp Obsolete Commands


The following command is obsolete effective in version 10.0:

purgeMonitoringData

Purges monitoring data from the Model repository.



Domain
This section describes changes to the domain in version 10.0.

Logs
Effective in version 10.0, the default location for system logs is <Informatica installation directory>/
logs/<node name>/.

The domain stores application services logs and system logs in the default location. You can change the
default directory path for logs with the System Log Directory option. You can use this option with any of the
following commands:

• infasetup DefineDomain
• infasetup DefineGatewayNode
• infasetup DefineWorkerNode
• infasetup UpdateGatewayNode
• infasetup UpdateWorkerNode
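
For example, a sketch of moving the system logs for a gateway node to a custom directory, assuming the -sld shorthand for the System Log Directory option; the path is illustrative:

infasetup UpdateGatewayNode -sld /opt/informatica/custom_logs

Run the command on the node that you want to update. infasetup commands typically require that you shut down the node before you run them.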

Previously, the domain stored application services logs and system logs in different locations. The default
directory for system logs was <Informatica installation directory>/tomcat/logs/.

For more information, see the "Log Management" chapter in the Informatica 10.0 Administrator Guide.

Log Format
Effective in version 10.0, all logs consistently contain the following information by default:

• Thread name.
• Timestamp, in milliseconds.

Previously, this information was not consistent in logs. For example, some logs did not contain timestamp
information, and of those that did, the timestamp format was not consistent.

For more information, see the "Log Management" chapter in the Informatica 10.0 Administrator Guide.

Job Log Events


When a Mapping task in a workflow starts a DTM instance to run a mapping, the DTM generates log events
for the mapping. The DTM stores the log files in a folder named mappingtask in the log directory that you
specify for the Data Integration Service process.

Previously, the DTM stored the log files in a folder named builtinhandlers.

Informatica Administrator
This section describes changes to the Administrator tool in version 10.0.

Domain tab
Effective in version 10.0, the Domain tab is renamed the Manage tab.

The Manage tab has the following changes:



Views on the Manage tab

The Manage tab includes the Domain and Schedules views. Use the Domain view to view and manage
the status and resource consumption of the domain. Use the Schedules view to create and manage
reusable schedules for deployed mappings and workflows.

The Domain view on the Manage tab includes the Domain Actions menu, the contents panel, the Object
Actions menu, the Service State Summary, and memory and CPU usage indicators.

Dependency graph

The dependency graph is moved from the Services and Nodes view to the Domain view. To access the
dependency graph, click the Actions menu for the domain, a service, or a node, and then choose View
Dependencies.
Global Settings

Global Settings are moved from the Monitor tab, formerly the Monitoring tab, to the Services and Nodes
view, where they are renamed Monitoring Configuration and appear as a view.

Overview views

The Overview views for the domain and folders in the Services and Nodes view are removed. They are
replaced by the Domain view on the Manage tab.

For more information, see the Informatica 10.0 Administrator Guide.



Monitoring
Effective in version 10.0, monitoring in the Administrator tool has the following changes:

Global Settings
Global Settings have the following changes:

• Global Settings are moved from the Monitor tab Actions menu to the Manage tab. Configure global
settings on the Monitoring Configuration view on the Services and Nodes view.
• The Number of Days to Preserve Historical Data option is renamed Preserve Summary Historical Data.
Minimum is 0. Maximum is 366. Default is 180.
• The Date Time Field option is renamed Show Milliseconds in Date Time Field.

Jobs
Jobs that users deploy from the Developer and Analyst tools are called ad hoc jobs. Ad hoc jobs include
previews, mappings, reference tables, enterprise discovery profiles, profiles, and scorecards. Previously, ad
hoc jobs were called jobs.

Navigation
The Monitoring tab is renamed the Monitor tab. Object monitoring is moved to the Execution Statistics view.

Preferences
The Preferences option in the Monitor tab Actions menu is renamed Report and Statistic Settings.

For more information, see the "Monitoring" chapter in the Informatica 10.0 Administrator Guide.

Informatica Analyst
This section describes changes to the Analyst tool in version 10.0.

Profiles
Effective in version 10.0, profiles in the Analyst tool have the following changes:

Column Profile
Effective in version 10.0, you can create a column profile with the Specify General Properties, Select Source,
Specify Settings, and Specify Rules and Filters steps in the profile wizard.

Previously, you created a column profile with the Step 1 of 6 through Step 6 of 6 steps in the profile wizard.

For more information about column profile, see the "Column Profiles in Informatica Analyst" chapter in the
Informatica 10.0 Data Discovery Guide.

Column Profile Results


Effective in version 10.0, you can view all the columns and rules in a profile in the summary view, and view
the properties of a column or rule in detail in the detailed view.

Previously, the profile results were displayed in Column Profiling, Properties, and Data Preview views.

For more information about column profile results, see the "Column Profile Results in Informatica Analyst"
chapter in the Informatica 10.0 Data Discovery Guide.



Edit a Column Profile
Effective in version 10.0, you can edit a column profile through the profile wizard.

Previously, you could click Actions > Edit to select and edit one of the options.

For more information about column profile, see the "Column Profiles in Informatica Analyst" chapter in the
Informatica 10.0 Data Discovery Guide.

Discovery Workspace
Effective in version 10.0, you can click Discovery workspace > Profile, and choose to create a single source
profile or enterprise discovery profile in the profile wizard.

Previously, you had to click Discovery workspace > Data Object Profile to create a profile, or click Discovery
workspace > Enterprise Discovery Profile to create an enterprise discovery profile.

For more information about column profile, see the "Column Profiles in Informatica Analyst" chapter in the
Informatica 10.0 Data Discovery Guide.

New Option
Effective in version 10.0, you can click New > Profile in the header area, and choose to create a single source
profile or enterprise discovery profile in the profile wizard.

Previously, you had to click New > Data Object Profile to create a profile, or click New > Enterprise Discovery
Profile to create an enterprise discovery profile.

For more information about column profile, see the "Column Profiles in Informatica Analyst" chapter in the
Informatica 10.0 Data Discovery Guide.

Create a Rule
Effective in version 10.0, you can create, add, or delete rules for a profile in the profile wizard.

Previously, you had to click Actions > Edit > Column Profiling Rules to add, delete, or create rules for the
profile.

For more information about rules, see the "Rules in Informatica Analyst" chapter in the Informatica 10.0 Data
Discovery Guide.

Create a Column Profile from a Data Object in Library Workspace


Effective in version 10.0, you can right-click on the data object in the Library workspace and create a column
profile.

Previously, this option was not available.

For more information about column profiles, see the "Column Profiles in Informatica Analyst" chapter in the
Informatica 10.0 Data Discovery Guide.

Filters
Effective in version 10.0, all the filters that you create for a profile are applicable to all the columns and data
domains in the profile and can be reused in the scorecard that you create on the profile.

Previously, you could create filters for the profile.

For more information about filters, see the "Filters in Informatica Analyst" chapter in the Informatica 10.0
Data Discovery Guide.

Sampling Options
Effective in version 10.0, the sampling option is applicable to both column profile and data domain discovery.
Previously, you could select different sampling options for the column profile and data domain discovery.



For more information about sampling options, see the Informatica 10.0 Data Discovery Guide.

Scorecards
This section describes changes to scorecards in the Analyst tool.

Notifications
Effective in version 10.0, scorecards send notifications using the email server configuration in the domain
SMTP Configuration properties.

Previously, scorecards used the email server configuration in the Data Integration Service properties.

Scorecard URL
Effective in version 10.0, when you add a scorecard URL to the source code of external applications or web
portals and access the URL, you must log in to Informatica Analyst to view the scorecard for security
reasons.

Previously, the scorecard URL for external applications did not prompt for login.

Informatica Developer
This section describes changes to the Developer tool in version 10.0.

Application Deployment Changes


This section describes changes to application deployment in version 10.0.

Retain State Information Check Box


Effective in Informatica 10.0, when you redeploy an application, the "Retain state information" check box
allows you to choose to retain the current state of run-time objects that are part of the deployed application.
The state refers to mapping properties and the properties of run-time objects such as Sequence Generator
transformations.

Previously, the Deploy dialog box gave you a choice of "Update" or "Replace." The "Retain state information"
check box replaces the "Update" check box, and is selected by default.

If you select "Retain state information," you retain run-time settings and properties in the deployed
application. If you clear "Retain state information," you discard the state of these settings and properties in
the deployed application.

Flat File Data Objects


Effective in version 10.0, you configure all of the format and run-time properties for a flat file data object in
the Advanced view. The Advanced view contains property sections that you can collapse and expand. The
column format sections that display depend on whether you configure a delimited or fixed-width column
format.




Previously, you configured the format and run-time properties for a flat file data object in the Read and Write
views. In the Read view, you selected the source transformation to configure format properties. You selected
the Output transformation to configure run-time properties. In the Write view, you selected the Input
transformation to configure run-time properties. You selected the target transformation to configure format
properties.

Microsoft SQL Server Changes


Effective in Informatica 10.0, Microsoft SQL Server contains the following changes:

• You can use the ODBC connection type to connect to Microsoft SQL Server.
• You can upgrade your existing connections by using the pmrep and infacmd commands. When you run the
upgrade command, all the existing connections are upgraded.
• The existing Microsoft SQL Server connection is deprecated, and support will be dropped in the next major
release. You can run the existing mappings without manual updates. If you use SSL connections, you must
select ODBC as the provider type in the connection and configure SSL in the DSN, as in the sketch after this list.
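
A sketch of such a DSN entry in the ODBC configuration file, assuming the DataDirect SQL Server Wire Protocol driver that ships with Informatica; the driver path, host, database, and certificate values are all illustrative:

[SQLServerSSL]
Driver=/opt/informatica/ODBC7.1/lib/DWsqls27.so
Description=DataDirect SQL Server Wire Protocol
HostName=sqlserver.example.com
PortNumber=1433
Database=sales
EncryptionMethod=1
ValidateServerCertificate=1
TrustStore=/opt/informatica/certs/sqlserver_ca.pem

In the DataDirect driver options, EncryptionMethod=1 requests SSL for the connection, and ValidateServerCertificate and TrustStore control how the driver verifies the server certificate.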

Logical Data Object Editing


This section describes changes to the ways you edit logical data objects in the Developer tool in version 10.0.

Logical Data Object and Logical Data Object Model Editors


Effective in Informatica 10.0, you edit logical data objects and logical data object models in separate editors.

Previously, you used the same editor to edit logical data objects and logical data object models.

For more information, see "Logical View of Data" chapter in the Informatica 10.0 Developer Tool Guide.

Logical Data Object Mappings


Effective in Informatica 10.0, you create logical data object mappings from the logical data object editor.
Click the Add button to add a read mapping or a write mapping for the logical data object.

Previously, you clicked File > New to create logical data object mappings.



For more information, see "Logical View of Data" chapter in the Informatica 10.0 Developer Tool Guide.

Pushdown Optimization for ODBC Sources and Targets


Effective in version 10.0, Informatica dropped support for pushdown optimization to ODBC sources and
targets that use a provider type of "Other." You must use a provider type that is database-specific.

Mappings
This section describes changes to mappings in version 10.0.

Parameter Files
Effective in version 10.0, the parameter file format is changed. The parameter file no longer contains
transformation parameters.

You can run mappings and workflows with the parameter files from previous versions. When you run a
mapping or workflow with the previous version parameter file, the Data Integration Service converts the
parameter file to the Informatica 10.0 version.

When you create a parameter file with the infacmd listMappingParams command, the Data Integration
Service creates a mapping parameter file without transformation parameters. The infacmd
listWorkflowParams command creates a workflow parameter file without transformation parameters.

In previous versions, when you created parameter files, the parameter files contained transformation
parameters.
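
For example, a sketch of generating a 10.0-format mapping parameter file with infacmd; the domain, service, user, application, mapping, and output file names are all hypothetical:

infacmd ms ListMappingParams -dn MyDomain -sn MyDIS -un Administrator -pd <password> -a MyApplication -m MyMapping -o myMapping_params.xml

Edit the parameter values in the generated file, and then pass the file when you run the mapping.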

For more information about parameter files, see the "Mapping Parameters" chapter in the Informatica 10.0
Developer Mapping Guide.

Partitioned Mappings
This section describes changes to partitioned mappings in version 10.0.

Parallelism Value Calculations


Effective in version 10.0, the Data Integration Service can create a different number of threads for each
mapping pipeline stage. The service determines the optimal number of threads for each pipeline stage. The
number of threads created for a single pipeline stage cannot exceed the maximum parallelism value.

Previously, the Data Integration Service calculated a single actual parallelism value and used that same value
for each mapping pipeline stage. The service calculated the actual parallelism value based on the maximum
parallelism values and on the maximum number of partitions for all flat file, IBM DB2 for LUW, or Oracle
sources read by a mapping.

Partitioned Decision and SQL Transformations


Effective in version 10.0, you can disable partitioning for a Decision or SQL transformation by clearing the
Partitionable advanced property for the transformation. The Data Integration Service uses one thread to
process the transformation, and can use multiple threads to process the remaining mapping pipeline stages.
You might want to disable partitioning for these transformations because these transformations might not
return the same result for each mapping run when they are processed with multiple threads.



Previously, the Decision transformation did not support partitioning. When a mapping contained a Decision
transformation, the Data Integration Service did not create partitions for the entire mapping. The SQL
transformation did support partitioning. You disabled partitioning for the entire mapping when this
transformation needed to be processed with one thread.

Partitioned Targets
Effective in version 10.0, if a mapping establishes order with a sorted relational source or a Sorter
transformation, the Data Integration Service can use multiple threads to run the mapping. To maintain order
in a partitioned mapping, you must specify that targets maintain the row order in the advanced properties for
the Write transformation. When you configure Write transformations to maintain row order, the Data
Integration Service uses a single thread to write to the target.

Previously, if a mapping included a sorted relational source, the Data Integration Service used one thread to
process each mapping pipeline stage. If a mapping included a Sorter transformation, the Data Integration
Service used one thread to process the Sorter transformation and all downstream mapping pipeline stages.

If you upgrade from an earlier version, all existing Write transformations are configured to maintain row
order. The Data Integration Service uses a single thread to write to the target to ensure that any order
established in the mapping is maintained. If an upgraded mapping does not establish an order, you can clear
the Maintain Row Order property in the advanced properties for a Write transformation so that the Data
Integration Service can use multiple threads to write to the target.

Partitioned Java Transformations


Effective in version 10.0, you can disable partitioning for a Java transformation by clearing the Partitionable
advanced property for the transformation. The Data Integration Service uses one thread to process the
transformation, and can use multiple threads to process the remaining mapping pipeline stages. You might
want to disable partitioning for a Java transformation when the Java code requires that the transformation
be processed with one thread.

You can configure a Java transformation to maintain the row order of the input data by selecting the
Stateless advanced property for the transformation.

Previously, you cleared the stateless property if the Java transformation needed to be processed with one
thread. When the stateless property was cleared, the Data Integration Service did not create partitions for the
entire mapping.

Transformations that Do Not Support Partitioning


Effective in version 10.0, when a mapping contains a transformation that does not support partitioning, the
Data Integration Service uses one thread to process the transformation. The service can use multiple threads
to process the remaining mapping pipeline stages.

Previously, when a mapping contained a transformation that did not support partitioning, the Data Integration
Service did not create partitions for the mapping. The service used one thread to process each mapping
pipeline stage.

For more information about partitioned mappings, see the "Partitioned Mappings" chapter in the Informatica
10.0 Developer Mapping Guide.

Pushdown Optimization
Effective in version 10.0, pushdown optimization is removed from the mapping optimizer level. To configure
a mapping for pushdown optimization you must select a pushdown type in the mapping run-time properties.

Previously, the Data Integration Service applied pushdown optimization by default with the normal or full
optimizer level.

For more information, see the Informatica 10.0 Developer Mapping Guide.

Run-time Properties
Effective in version 10.0, configure Validation Environments on the Run-time tab. The mapping Properties
view no longer contains an Advanced properties tab.

Previously, you configured the Validation Environments property on the Advanced properties tab.

For more information, see the Informatica 10.0 Developer Mapping Guide.

Metadata Manager
This section describes changes to Metadata Manager in version 10.0.

ODBC Connectivity for Informix Resources


Effective in version 10.0, when you load an Informix resource, the PowerCenter Integration Service uses
ODBC to connect to the Informix database. Therefore, you can create and load Informix resources whether
the Informatica domain runs on Windows or UNIX. To connect to Informix, you must configure an ODBC
connection to the Informix database.

Previously, the PowerCenter Integration Service used native connectivity to connect to the Informix database.
You could create and load Informix resources only when the Informatica domain ran on 32-bit Windows.

For more information about configuring Informix resources, see the "Database Management Resources"
chapter in the Informatica 10.0 Metadata Manager Administrator Guide.

ODBC Connectivity for Microsoft SQL Server Resources


Effective in version 10.0, when you load a Microsoft SQL Server resource, the PowerCenter Integration
Service uses ODBC to connect to the database. The PowerCenter Integration Service retrieves the server
name and the database name from the connect string and creates a data source using the installed ODBC
driver.

Therefore, you no longer need to perform the following tasks when you configure a Microsoft SQL Server
resource:

• On Windows, you do not need to install the Microsoft SQL Server Native Client.
• On UNIX, you do not need to create a data source for the Microsoft SQL Server database in the [Link]
file.
Note: If you previously created a data source in the [Link] file, you can still use it by entering the data
source name as the connect string.
• You do not need to set the ODBC Connection Mode property for the Metadata Manager Service in the
Administrator tool. This property is removed because the connection mode for Microsoft SQL Server is
always ODBC.
Previously, the PowerCenter Integration Service used native connectivity on Windows and ODBC connectivity
on UNIX.

For more information about configuring Microsoft SQL Server resources, see the "Database Management
Resources" chapter in the Informatica 10.0 Metadata Manager Administrator Guide.



Impact Summary for PowerCenter Objects
Effective in version 10.0, the impact summary displays different information when you view metadata details
for some PowerCenter objects.

The impact summary has the following behavior changes:

• When you view metadata details for a session task instance, Metadata Manager lists the mappings that
the session task instance runs as related catalog objects but not in the impact summary.
Previously, Metadata Manager listed the mappings as related catalog objects and in the upstream and
downstream impact summary.
• When you view metadata details for a mapplet instance that contains a source definition, Metadata
Manager does not list the parent mapping in the impact summary.
Previously, Metadata Manager listed the parent mapping in the downstream impact summary.
• When you view metadata details for a mapplet instance that does not contain a source, Metadata
Manager does not display an impact summary.
Previously, Metadata Manager displayed an impact summary for mapplet instances that do not contain a
source.
• When you view metadata details for an Input or Output transformation instance in a mapplet, Metadata
Manager does not display an impact summary.
Previously, Metadata Manager displayed an impact summary for Input and Output transformation
instances in a mapplet.
• When you view metadata details for a Source Qualifier instance in a mapplet that contains a source
definition, Metadata Manager does not display the parent mapping in the impact summary.
Previously, Metadata Manager displayed the parent mapping in the impact summary.
For more information about the impact summary, see the "Viewing Metadata" chapter in the Informatica 10.0
Metadata Manager User Guide.

Maximum Concurrent Resource Loads


Effective in version 10.0, the maximum value for the Max Concurrent Resource Load property for the
Metadata Manager Service is 10. Therefore, you can load up to 10 resources simultaneously.

Previously, the maximum value for the property was 5.

For more information about the Max Concurrent Resource Load property, see the "Metadata Manager
Service" chapter in the Informatica 10.0 Application Service Guide.

Search
Effective in version 10.0, Metadata Manager displays the advanced search criteria and the search results in
the Search Results panel at the bottom of the Browse tab. The Search Results panel allows you to view the
metadata catalog, business glossaries, shortcuts, or data lineage diagram while you perform a search. You
can resize, minimize, and restore the Search Results panel.

Previously, Metadata Manager displayed the advanced search criteria and the search results on a separate
tab.

For more information about searches, see the "Searching Metadata" chapter in the Informatica 10.0 Metadata
Manager User Guide.



Metadata Manager Log File Changes
Effective in version 10.0, the location for the Metadata Manager log files is updated to store all the log files in
one directory.

The following Metadata Manager log files are stored in the directory <Informatica installation
directory>\logs\<node name>\services\MetadataManagerService\<Metadata Manager service name>:

• Load details log
• mm_agent.log
• [Link]
• [Link]
• [Link]
Note: [Link] is stored in the new log files directory when the Metadata Manager Service calls
mmRepoCmd. If you run mmRepoCmd from the command line, the utility creates the log file in the
directory where mmRepoCmd is located.
In previous versions of Metadata Manager, these log files were located in different directories. After you
upgrade Metadata Manager from a previous version to version 10.0, the existing log files are not moved to
the new location.

For more information about Metadata Manager log files, see the Informatica 10.0 Metadata Manager
Administrator Guide.

Business Glossary Model


Effective in version 10.0, you cannot export or import the Business Glossary model. Additionally, you cannot
customize the Business Glossary model by configuring attributes or relationships.

To export and import business glossary assets and templates or to customize business glossaries, use the
Analyst tool.

Profiling
Effective in version 10.0, Metadata Manager does not extract profiling information from relational metadata
sources.

Profiling is available in the Analyst tool and the Developer tool.

PowerCenter
This section describes changes to PowerCenter in version 10.0.

Informix Native Connections


Effective in version 10.0, the Informix native connection is obsolete. Informatica dropped support for Informix
native connections.

Create an ODBC connection to connect to an Informix database.

For more information, see the Informatica 10.0 Application Service Guide.



pmrep Changes
This section describes the changes to pmrep commands.

PurgeVersion command
• Effective in version 10.0, you can use pmrep purgeVersion -c with or without the -p option.
When you use the -c option with the -p option, the output lists the object versions that are purged and
then lists which object versions are contained in deployment groups.
When you use the -c option without the -p option, the command does not purge versions that are part of
deployment groups.
Previously, when you used the -c option, the -p option was required.
• Effective in version 10.0, if an object version is a member of a deployment group, the version is not
purged.
When you use pmrep purgeVersion with the -k option, the results display all versions that are not purged
and the reason that each version is not purged.
When a version is not purged because it is in a deployment group, the reason lists only the first
deployment group that prevents the purge.
Previously, the inclusion of a version in a deployment group did not affect whether the version was
purged.
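
For example, a sketch of a deployment-group-aware purge; this assumes that the -n option selects versions by retaining the latest n versions, which you should verify in the Command Reference, and it requires a prior pmrep connect:

pmrep purgeVersion -n 5 -c -p -k

Here -c skips versions that are in deployment groups, -p lists the versions that are purged together with their deployment-group membership, and -k reports the versions that are not purged and why.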
For more information, see the Informatica 10.0 Command Reference.

PowerCenter Data Profiling


Effective in version 10.0, PowerCenter Data Profiling is obsolete.

To perform profiling and discovery, use Informatica Analyst or Informatica Developer.

For more information, see the Informatica 10.0 Data Discovery Guide.

PowerExchange Adapters
This section describes changes to PowerExchange adapters in version 10.0.

PowerExchange Adapters for Informatica


This section describes changes to Informatica adapters in version 10.0.

PowerExchange for SAP NetWeaver


Effective in version 10.0, PowerExchange for SAP NetWeaver contains the following changes:

SAP Connections

The SAP connections that you created in versions earlier than 10.0 are deprecated. The deprecated
connection category is named SAP (Deprecated) under Enterprise Application.

Informatica will drop support for the deprecated connections in a future release. You can run mappings
with the deprecated connections and also create new deprecated connections. However, Informatica
recommends that you create new SAP connections by using the SAP category under Enterprise
Application.




SAP Data Objects

The SAP data objects that you created in versions earlier than 10.0 are deprecated. The deprecated data
object type is named SAP Data Object (Deprecated).

Informatica will drop support for the deprecated data objects in a future release. You can run mappings
with the existing data objects and also create new deprecated data objects. However, Informatica
recommends that you create new data objects of type SAP Table Data Object to read data from SAP
tables.




For more information, see the Informatica 10.0 PowerExchange for SAP NetWeaver User Guide.

Reference Data
This section describes changes to reference data operations in version 10.0.

Classifier Models
Effective in version 10.0, you view and manage the data in a classifier model in a single view in the Developer
tool.

Previously, you toggled between two views in the Developer tool to see all of the options on a classifier
model.

For more information, see the "Classifier Models" chapter of the Informatica 10.0 Reference Data Guide.



Probabilistic Models
Effective in version 10.0, Informatica uses version 3.4 of the natural language processing engine from the
Stanford Natural Language Processing Group.

Previously, Informatica used version 1.2.6 of the engine.

For more information, see the "Reference Data in the Developer Tool" chapter of the Informatica 10.0
Reference Data Guide.

Rule Specifications
This section describes changes in rule specifications in version 10.0.

• Effective in version 10.0, you create inputs and update the input properties in the Manage Global Inputs
dialog box.
Previously, you created and updated an input in the rule set that read the input.
• Effective in version 10.0, a rule set uses text indicators to describe the sequence in which data moves
through the rule statements.
Previously, a rule set used numbers to indicate the sequence.
• Effective in version 10.0, the Design workspace in the Analyst tool uses the term "generate" to identify
the operation that creates a mapplet rule from a rule specification.
Previously, the Design workspace used the term "compile" to identify the operation.
• Effective in version 10.0, you can validate and generate a rule specification that contains unused inputs.
Previously, a rule specification that contained unused inputs was not valid.
• Effective in version 10.0, you can create and begin work on a rule specification in a single operation.
Previously, you created and opened a rule specification in separate operations.
For more information, see the Informatica 10.0 Rule Specification Guide.

Security
This section describes changes to security in Informatica version 10.0.

Authentication
This section describes changes to authentication for the Informatica domain.

Effective in Informatica 10.0, single sign-on for an Informatica domain without Kerberos authentication has
the following changes:
Single sign-on with the Developer tool

When you open a web application client from the Developer tool, you must log in to the web application.

Previously, you did not have to enter login information for the web application.



Logging out from web application clients

You must log out from each web application client separately if you use the Administrator tool to open a
web application client. For example, if you use the Administrator tool to open the Analyst tool, you must
log out of the Administrator tool and the Analyst tool separately.

Sources and Targets


This section describes changes to sources and targets in version 10.0.

Sources and Targets in PowerCenter


Effective in version 10.0, the Data Transformation source and target are no longer supported. Instead of the
Data Transformation source and target, you can use a flat file source and flat file target that point to the
relevant file.

For more information, see the Informatica PowerCenter 10.0 Designer Guide.

Transformations
This section describes changed transformation behavior in version 10.0.

Informatica Transformations
This section describes the changes to the Informatica transformations in version 10.0.

Address Validator Transformation


Effective in Informatica 10.0, you cannot use a country name as a parameter value on the Default Country
advanced property. When you define a parameter to specify the default country, enter the three-character ISO
country code as the parameter value.

Previously, you entered the country name or the three-character ISO country code as the parameter value.
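For example, to specify Germany as the default country through a parameter, set the parameter value to DEU, which is the three-character ISO code for Germany, rather than the country name Germany.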

Aggregator Transformation
Effective in version 10.0, you define the group by ports on the Group By tab of the Aggregator transformation
Properties view.

You can parameterize the ports you want to include in the aggregator group with a port list parameter. You
can include dynamic ports in the Aggregator transformation.

Previously, you selected group by ports on the Ports tab of the transformation Properties view.



The following image shows the Group By tab in the Aggregator transformation:

For more information about the Aggregator transformation, see the Aggregator Transformation chapter in the
Informatica 10.0 Developer Transformation Guide.

Data Processor Transformation


This section describes the changes to the Data Processor transformation.

Additional Output Ports for Relational to Hierarchical Transformation


Effective in version 10.0, a Data Processor transformation with relational input and hierarchical output can
have additional output ports. For example, a transformation can work with services that produce validation
reports in addition to the main output. Previously, additional output ports were not available.

Multiple JSON Input


Effective in version 10.0, you can use a wizard to create a Data Processor transformation in the Developer
tool with an input file that contains multiple JSON messages. The transformation can process up to 1 M of
JSON messages. Previously, the transformation processed a single JSON message.

Pass-Through Ports for Relational to Hierarchical Transformation


Effective in version 10.0, a Data Processor transformation with relational input and hierarchical output can
use pass-through ports. You add pass-through ports to the root group of the relational structure. Previously,
pass-through ports were not available.

Match Transformation
Effective in Informatica 10.0, the Match transformation has the following changes in behavior:

• Effective in version 10.0, the Match transformation generates unique cluster ID values across all threads
in the same process.
Previously, the Match transformation generated the cluster ID values independently on each thread.
• Effective in version 10.0, you select the following option to connect the Match transformation to a
persistent store of identity index data:
Identity Match with Persistent Record ID



Previously, you selected the Persist Record IDs option.
• Effective in version 10.0, you can select the Clusters - Best Match output option in all types of identity
match analysis.
Previously, you could select the Clusters - Best Match option in single-source identity match analysis only.

Rank Transformation
Effective in version 10.0, you define the rank port and the group by ports on the Rank tab of the
transformation Properties view.

You can parameterize the rank port with a port parameter. You can parameterize the group by ports with a
port list parameter. You can include dynamic ports in the Rank transformation.

Previously, you selected the rank port and the group by ports on the Ports tab of the transformation
Properties view.

The following image shows the Rank tab:

For more information about the Rank transformation, see the Informatica 10.0 Developer Transformation
Guide.

Sorter Transformation
This section describes changes to the Sorter transformation in version 10.0.

Cache Size
Effective in version 10.0, the Sorter transformation pages fewer cache files to disk, which improves
performance. If the configured cache size is too small for the Sorter transformation, the Data Integration
Service processes some of the data in memory and stores only the overflow data in cache files.

Previously, if the cache size was too small, the Data Integration Service paged all the cache files to the disk.

Sort Keys and Distinct Rows


Effective in version 10.0, you define the sort keys on the Sort tab of the Sorter transformation Properties
view. You can also choose to create distinct rows on the Sort tab.

You can parameterize the ports you want to include in the sort key with a sort list parameter. You can include
dynamic ports in the Sorter transformation.

Previously, you selected ports for sort keys on the Ports tab of the transformation Properties view. You
selected to create distinct rows on the Advanced tab.

The following image shows the Sort tab:

For more information, see the Informatica 10.0 Developer Transformation Guide.

Workflows
This section describes changed workflow behavior in version 10.0.

Informatica Workflows
This section describes the changes to Informatica workflow behavior in version 10.0.

Command Tasks
Effective in version 10.0, a Command task does not fail when the working directory that the task specifies is
not valid.

Previously, a Command task failed when the working directory was not valid.

For more information, see the Informatica 10.0 Developer Workflow Guide.

Data Integration Service Options


Effective in version 10.0, you configure a single Data Integration Service to run workflows.

Previously, you could configure different Data Integration Services to run Human tasks and to run the other
stages in a workflow.

Effective in version 10.0, the Workflow Orchestration Service module on the Data Integration Service runs all
stages in a workflow.



Previously, the Workflow Service module ran all stages in a workflow with the exception of a Human task.
The Human Task Service module on the Data Integration Service ran a Human task in a workflow. The
Workflow Orchestration Service module replaces the Workflow Service module and the Human Task Service
module in version 10.0.

Note: Complete all Human tasks that you run in an earlier version of Informatica before you upgrade to
version 10.0.

For more information, see the Informatica 10.0 Application Service Guide.

Human Tasks
Effective in version 10.0, a Human task does not stop a workflow when the exceptionLoadCount input value
on the task is less than 1. When the exceptionLoadCount input value is less than 1, the Human task
completes but generates no data for Analyst tool users.

Previously, a Human task stopped a workflow when the exceptionLoadCount input value was less than 1.

Effective in version 10.0, a Human task sends email notifications using the email server configuration in the
Email Service properties.

Previously, a Human task sent email notifications using the email server configuration in the Data Integration
Service properties.

Effective in version 10.0, you cannot move from one step to another in a Human task if you cancel the
workflow in the following scenario:

• The Human task is running.


• The Data Integration Service distributed all of the task instances that the Human task specifies.
Previously, when you canceled the workflow, you could complete all of the steps in the Human task.

For more information, see the Informatica 10.0 Developer Workflow Guide.

Mapping Tasks
Effective in version 10.0, the Data Integration Service creates a log file for each instance of a Mapping task
that runs in a workflow instance. If the Mapping task restarts following an interruption in an earlier workflow
run, the Data Integration Service creates a log file for the restarted task.

Previously, the Data Integration Service stored log data for all instances of a Mapping task that ran in a
workflow instance in a single file.

For more information, see the Informatica 10.0 Administrator Guide.

Notification Tasks
Effective in version 10.0, a Notification task sends email notifications using the email server configuration in
the Email Service properties.

Previously, a Notification task sent email notifications using the email server configuration in the Data
Integration Service properties.

For more information, see the Informatica 10.0 Developer Workflow Guide.

Run-Time Metadata
Effective in version 10.0, the Data Integration Service stores all run-time metadata for a workflow in a set of
tables in a single database. You select the database connection as a Workflow Orchestration Service
property on the Data Integration Service.

Previously, the Data Integration Service stored run-time metadata for a workflow in the Model repository and
stored any Human task metadata in the Human task database. The Human task database is obsolete in
version 10.0.

Note: You must create the workflow database contents before you run a workflow. To create the contents,
use the Actions menu options for the Data Integration Service in the Administrator tool.

For more information, see the Informatica 10.0 Application Service Guide.

Workflow Monitoring
Effective in version 10.0, a workflow can enter a completed state if a Command task or a Mapping task in the
workflow sequence fails to complete.

For example, a workflow can continue to run to completion if a Mapping task fails in one of the following
scenarios:

• You enabled the workflow for recovery, and you configured the Mapping task with a skip recovery
strategy.
• You did not enable the workflow for recovery.
Previously, a workflow entered a failed state if a Command task or a Mapping task failed during the workflow
run.

For more information, see the Informatica 10.0 Administrator Guide and the Informatica 10.0 Developer
Workflow Guide.

Workflow Object Names


Effective in version 10.0, the following object names must use characters and symbols that conform to the
XML 1.0 specification:

• Workflow names
• Task names
• Gateway names
• Workflow application names
• Workflow variable names
• Workflow parameter names
The XML 1.0 specification excludes a small number of characters and symbols from the names. If any name
contains a character or symbol that the specification excludes, the workflow fails to run.

Previously, the XML 1.0 specification did not determine the range of valid characters and symbols in
workflow names and associated object names.

If you upgrade to version 10.0 or later, edit any workflow or associated object name that contains a character
or a symbol that the XML 1.0 specification does not support.

For more information, see the Informatica 10.1 Upgrading from Version 9.5.1 Guide and the Informatica 10.1
Upgrading from Version 9.6.1 Guide.

Workflow Recovery
Effective in version 10.0, the Data Integration Service does not impose a limit on the number of attempts to
recover a workflow. The Administrator tool does not display the number of times that you try to recover the
workflow.

Previously, you configured a maximum number of recovery attempts in the Developer tool. The monitoring
features of the Administrator tool displayed the number of times that you tried to recover the workflow.

Effective in version 10.0, an aborted workflow is not recoverable.

Previously, an aborted workflow was recoverable.



Effective in version 10.0, when you cancel a workflow, the currently running task might remain in a Running
state while the workflow enters a Canceled state. Because the task runs to completion, the workflow status
can change to Canceled while the task is still running.

Previously, when you canceled a workflow, the workflow entered a Canceled state when the currently running
task ended.

For more information, see the Informatica 10.0 Administrator Guide and the Informatica 10.0 Developer
Workflow Guide.

Chapter 19

Release Tasks (10.0)


This chapter includes the following topic:

• Mappings

Mappings
This section describes release tasks for Mappings in version 10.0.

Parameter Precision
Effective in version 10.0, the size of a default parameter value must be less than or equal to the precision
specified for the parameter. In previous versions, if the parameter default value was greater than the
precision size, the Data Integration Service truncated the parameter default value and the mapping ran
successfully.

After the upgrade to 10.0 is complete, you must verify that the size of each parameter default value is less
than or equal to the precision specified for the parameter. If the parameter default value is greater than the
precision, update the default value or change the precision. Redeploy the mapping.

In version 10.0, if the size of the parameter default value is greater than the parameter precision, a mapping
fails with the following error:

The parameter [my_parameter] should have a default value length less than or equal to the
precision.
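
For example, consider a hypothetical string parameter with a precision of 5 and a default value of ABCDEFG. The default value is seven characters long, so the mapping fails in version 10.0. Shorten the default value to five characters or fewer, or increase the precision to 7 or more, and then redeploy the mapping.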

Part V: Version 9.6.1
This part contains the following chapters:

• New Features, Changes, and Release Tasks (9.6.1 HotFix 4)
• New Features, Changes, and Release Tasks (9.6.1 HotFix 3)
• New Features, Changes, and Release Tasks (9.6.1 HotFix 2)
• New Features, Changes, and Release Tasks (9.6.1 HotFix 1)
• New Features (9.6.1)
• Changes (9.6.1)

Chapter 20

New Features, Changes, and Release Tasks (9.6.1 HotFix 4)
This chapter includes the following topics:

• New Features (9.6.1 HotFix 4)
• Changes (9.6.1 HotFix 4)
• Release Tasks (9.6.1 HotFix 4)

New Features (9.6.1 HotFix 4)


This section describes new features in version 9.6.1 HotFix 4.

Command Line Programs
This section describes new commands in version 9.6.1 HotFix 4.

infacmd isp Commands


The following table describes a new infacmd isp command:

Command Description

ListDomainCiphers Displays one or more of the following cipher suite lists used by the Informatica domain or a
gateway node:
- Black list. User-specified list of cipher suites that the Informatica domain blocks.
- Default list. List of cipher suites that Informatica supports by default.
- Effective list. The list of cipher suites that the Informatica domain uses after you configure it with the
infasetup updateDomainCiphers command. The effective list supports cipher suites in the default list and
white list but blocks cipher suites in the black list.
- White list. User-specified list of cipher suites that the Informatica domain can use in addition to the
default list.
You can specify which lists you want to display.

For more information, see the "infacmd isp Command Reference" chapter in the Informatica 9.6.1 HotFix 4
Command Reference.



infasetup Commands
The following table describes new infasetup commands:

Command Description

ListDomainCiphers Displays one or more of the following cipher suite lists that the Informatica domain or a
gateway node uses:
- Black list. User-specified list of cipher suites that the Informatica domain blocks.
- Default list. List of cipher suites that Informatica supports by default.
- Effective list. The list of cipher suites that the Informatica domain uses after you configure it with the
infasetup updateDomainCiphers command. The effective list supports cipher suites in the default list and
white list but blocks cipher suites in the black list.
- White list. User-specified list of cipher suites that the Informatica domain can use.
You can specify which lists you want to display.

updateDomainCiphers Updates the cipher suites that the Informatica domain can use with a new effective list.

The following table describes updated options for infasetup commands:

Command Description

DefineDomain, DefineGatewayNode, DefineWorkerNode, UpdateGatewayNode, UpdateWorkerNode
The commands contain the following new options:
- cipherWhiteList | -cwl
- cipherWhiteListFile | -cwlf
- cipherBlackList | -cbl
- cipherBlackListFile | -cblf
Use these options to configure cipher suites for an Informatica domain that uses secure
communication within the domain or secure connections to web application services.
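
For example, the following command line is a minimal sketch that shows only the new cipher options. The cipher suite names are standard JSSE names used for illustration, and any other options that the command requires in your environment are omitted:

infasetup UpdateGatewayNode -cipherWhiteList TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256 -cipherBlackList TLS_RSA_WITH_AES_128_CBC_SHA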

For more information, see the "infasetup Command Reference" chapter in the Informatica 9.6.1 HotFix 4
Command Reference.

Connectivity
This section describes new connectivity features in version 9.6.1 HotFix 4.

Schema Names in IBM DB2 Connections


Effective in version 9.6.1 HotFix 4, when you use an IBM DB2 connection to import a table in the Developer
tool or the Analyst tool, you can specify one or more schema names from which you want to import the table.
Use the ischemaname attribute in the metadata connection string URL to specify the schema names. Use the
pipe (|) character to separate multiple schema names.

For example, enter the following syntax in the metadata connection string URL:

jdbc:informatica:db2://<host name>:<port>;DatabaseName=<database
name>;ischemaname=<schema_name1>|<schema_name2>|<schema_name3>
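
For example, a hypothetical connection that imports tables from the SALES and HR schemas of a database named ORDERS might use the following metadata connection string URL, where the host, port, and names are placeholders:

jdbc:informatica:db2://dbhost:50000;DatabaseName=ORDERS;ischemaname=SALES|HR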

For more information, see the Informatica 9.6.1 HotFix 4 Developer Tool Guide and Informatica 9.6.1 HotFix 4
Analyst Tool Guide.

Exception Management
This section describes new exception management features in version 9.6.1 HotFix 4.

Search and replace data values by data type

Effective in version 9.6.1 HotFix 4, you can configure the options in an exception task to search and
replace data values based on the data type. You can configure the options to search and replace data in
any column that contains date, string, or numeric data.

When you specify a data type, the Analyst tool searches for the value that you enter in any column that
uses the data type. You can find and replace any value that a string data column contains. You can
perform case-sensitive searches on string data. You can search for a partial match or a complete match
between the search value and the contents of a field in a string data column.

For more information, see the Exception Records chapter in the Informatica 9.6.1 HotFix 4 Exception
Management Guide.

Informatica Domain
This section describes new Informatica domain features in version 9.6.1 HotFix 4.

Domain Reports
Effective in version 9.6.1 HotFix 4, the License Management Report includes the consumed cores property.
This property indicates the number of cores on the machine.

For more information about the License Management Report, see the "Domain Reports" chapter in the
Informatica 9.6.1 HotFix 4 Administrator Guide.

Informatica Transformations
This section describes new Informatica transformation features in version 9.6.1 HotFix 4.

Address Validator Transformation


This section describes the new Address Validator transformation features.

The Address Validator transformation contains additional address functionality for the following countries:

Ireland

Effective in version 9.6.1 HotFix 4, you can return the eircode for an address in Ireland. An eircode is a
seven-character code that uniquely identifies an Ireland address. The eircode system covers all
residences, public buildings, and business premises and includes apartment addresses and addresses in
rural townlands.

To return the eircode for an address, select a Postcode port or a Postcode Complete port.

France

Effective in version 9.6.1 HotFix 4, address validation uses the Hexaligne 3 repository of the National
Address Management Service to certify a France address to the SNA standard.



The Hexaligne 3 data set contains additional information on delivery point addresses, including sub-
building details such as building names and residence names.

Germany

Effective in version 9.6.1 HotFix 4, you can retrieve the three-digit street code part of the Frachtleitcode
or Freight Code as an enrichment to valid Germany addresses. The street code identifies the street
within the address.

To retrieve the street code as an enrichment to verified Germany addresses, select the Street Code DE
port. Find the port in the DE Supplementary port group.

Informatica adds the Street Code DE port in version 9.6.1 HotFix 4.

South Korea

Effective in version 9.6.1 HotFix 4, you can verify older, lot-based addresses and addresses with older,
six-digit post codes in South Korea. You can verify and update addresses that use the current format, the
older format, and a combination of the current and older formats. A current South Korea address has a
street-based format and includes a five-digit post code. A non-current address has a lot-based format
and includes a six-digit post code.

To verify a South Korea address in an older format and to change the information to another format, use
the Address Identifier KR ports. You update the address information in two stages. First, run the address
validation mapping in batch or interactive mode and select the Address Identifier KR output port. Then,
run the address validation mapping in address code lookup mode and select the Address Identifier KR
input port. Find the Address Identifier KR input port in the Discrete port group. Find the Address Identifier
KR output port in the KR Supplementary port group.

To verify that the Address Validator transformation can read and write the address data, add the
Supplementary KR Status port to the transformation.

Informatica adds the Address Identifier KR ports, the Supplementary KR Status port, and the KR
Supplementary port group in version 9.6.1 HotFix 4.

Effective in version 9.6.1 HotFix 4, you can retrieve South Korea address data in the Hangul script and in
a Latin script.

United Kingdom

Effective in version 9.6.1 HotFix 4, you can retrieve delivery point type data and organization key data for
a United Kingdom address. The delivery point type is a single-character code that indicates whether the
address points to a residence, a small organization, or a large organization. The organization key is an
eight-digit code that the Royal Mail assigns to small organizations.

To add the delivery point type to a United Kingdom address, use the Delivery Point Type GB port. To add
the organization key to a United Kingdom address, use the Organization Key GB port. Find the ports in
the UK Supplementary port group. To verify that the Address Validator transformation can read and write
the data, add the Supplementary UK Status port to the transformation.

Informatica adds the Delivery Point Type GB port and the Organization Key GB port in version 9.6.1
HotFix 4.

For more information, see the Informatica 9.6.1 HotFix 4 Address Validator Port Reference.

Metadata Manager
This section describes new Metadata Manager features in version 9.6.1 HotFix 4.

Application Properties
Effective in version 9.6.1 HotFix 4, you can configure new application properties in the Metadata Manager
[Link] file.

The following table describes new Metadata Manager application properties in [Link]:

Property Description

[Link] Maximum number of errors that the Metadata Manager Service can
encounter before the custom resource load fails.

[Link] Number of errors that the Metadata Manager Service writes to the
in-memory cache and to the [Link] file in one batch when you load a custom
resource.

For more information about the [Link] file, see the "Metadata Manager Properties Files" appendix in
the Informatica 9.6.1 HotFix 4 Metadata Manager Administrator Guide.

Migrate Business Glossary Audit Trail History and Links to Technical Metadata
Effective in version 9.6.1 HotFix 4, you can migrate audit trail history and links to technical metadata when
you export business glossaries. You can import the audit trail history and links in the Analyst tool.

For more information, see the Informatica 9.6.1 HotFix 4 Upgrading from Version 9.5.1 Guide.

PowerExchange Adapters
This section describes new PowerExchange adapter features in version 9.6.1 HotFix 4.

PowerExchange Adapters for PowerCenter


This section describes new PowerCenter adapter features in version 9.6.1 HotFix 4.

PowerExchange for Greenplum


Effective in version 9.6.1 HotFix 4, you can configure Kerberos authentication for native Greenplum
connections.

For more information, see the "Greenplum Sessions and Workflows" chapter in the Informatica 9.6.1 HotFix 4
PowerExchange for Greenplum User Guide for PowerCenter.

PowerExchange for Teradata Parallel Transporter API


Effective in version 9.6.1 HotFix 4, you can configure Kerberos authentication for native Teradata PT
connections.

For more information, see the "Teradata PT API Sessions and Workflows" chapter in the Informatica 9.6.1
HotFix 4 PowerExchange for Teradata Parallel Transporter API User Guide for PowerCenter.



Security
This section describes new security features in version 9.6.1 HotFix 4.

Custom Cipher Suites


Effective in version 9.6.1 HotFix 4, you can customize the cipher suites that the Informatica domain uses for
secure communication within the domain and secure connections to web application services. You can
create a whitelist and blacklist to enable or block specific cipher suites.

The Informatica domain uses an effective list of cipher suites that includes the cipher suites in the default
list and whitelist but blocks the cipher suites in the blacklist.

For more information, see the "Domain Security" chapter in the Informatica 9.6.1 HotFix 4 Security Guide.

Changes (9.6.1 HotFix 4)


This section describes changes in version 9.6.1 HotFix 4.

Change to Support in Version 9.6.1 HotFix 4


Effective in version 9.6.1 HotFix 4, Informatica deferred support for Big Data Edition. Support will be
reinstated in a future release.

Application Services
This section describes changes to Application Services in version 9.6.1 HotFix 4.

Reporting and Dashboards Service (Deprecated)


Effective in version 9.6.1 HotFix 4, Informatica deprecated the Reporting and Dashboards Service.
Informatica will drop support for the Reporting and Dashboards Service in a future release.

If you upgrade to version 9.6.1 HotFix 4, you can continue to use the Reporting and Dashboards Service.
Informatica recommends that you begin using a third-party reporting tool before Informatica drops support.
You can use the recommended SQL queries for building all the reports shipped with earlier versions of
PowerCenter.

If you install version 9.6.1 HotFix 4, you cannot create a Reporting and Dashboards Service. You must use a
third-party reporting tool to run PowerCenter and Metadata Manager reports.

For information about the PowerCenter Reports, see the Informatica PowerCenter Using PowerCenter Reports
Guide. For information about the PowerCenter repository views, see the Informatica PowerCenter Repository
Guide.

Informatica Domain
This section describes changes to Informatica Domain in version 9.6.1 HotFix 4.

Domain Reports
Effective in version 9.6.1 HotFix 4, the property cores in the License Management Report is renamed to cores
per socket. This property describes the number of cores for each socket on the machine.

For more information about the License Management Report, see the "Domain Reports" chapter in the
Informatica 9.6.1 HotFix 4 Administrator Guide.

Informatica Installation
This section describes the changes to the Informatica Installer in version 9.6.1 HotFix 4.

Install the Java Runtime Environment


Effective in version 9.6.1 HotFix 4, Informatica uses the Java Runtime Environment (JRE) instead of the Java
Development Kit (JDK).

Before you install or upgrade Informatica on AIX, HP-UX, or zLinux, you must first install the JRE and set
the INFA_JRE_HOME environment variable. When you upgrade, remove the INFA_JDK_HOME environment
variable.

For more information, see the "Install the Java Runtime Environment" chapter in the Informatica 9.6.1 HotFix
4 Installation and Configuration Guide and the Informatica upgrade guides.

Informatica Transformations
This section describes changes to Informatica transformations in version 9.6.1 HotFix 4.

Address Validator Transformation


This section describes the changes to the Address Validator transformation.

The Address Validator transformation contains the following updates to address functionality:

Address validation engine upgrade

Effective in version 9.6.1 HotFix 4, the Address Validator transformation uses version 5.8.1 of the
Informatica Address Verification software engine. The engine enables the features that Informatica adds
to the Address Validator transformation in version 9.6.1 HotFix 4.

Previously, the transformation used version 5.7.0 of the Informatica AddressDoctor software engine.

Product name change

Informatica Address Verification is the new name of Informatica AddressDoctor. Informatica
AddressDoctor became Informatica Address Verification in version 5.8.0.

Changes to geocode options for United Kingdom addresses

Effective in version 9.6.1 HotFix 4, you can select Rooftop as a geocode data property to retrieve
rooftop-level geocodes for United Kingdom addresses.

Previously, you selected the Arrival Point geocode data property to retrieve rooftop-level geocodes for
United Kingdom addresses.

If you upgrade a repository that includes an Address Validator transformation, you do not need to
reconfigure the transformation to specify the Rooftop geocode property. If you specify rooftop geocodes
and the Address Validator transformation cannot return the geocodes for an address, the transformation
does not return any geocode data.

Support for unique property reference numbers in United Kingdom input data

Effective in version 9.6.1 HotFix 4, the Address Validator transformation has a UPRN GB input port and a
UPRN GB output port.

Previously, the transformation had a UPRN GB output port.



Use the input port to retrieve a United Kingdom address for a unique property reference number that you
enter. Use the UPRN GB output port to retrieve the unique property reference number for a United
Kingdom address.

For more information, see the Informatica 9.6.1 HotFix 4 Address Validator Port Reference.

Metadata Manager
This section describes changes to Metadata Manager in version 9.6.1 HotFix 4.

Certificate Validation for Command Line Programs


Effective in version 9.6.1 HotFix 4, when you configure a secure connection for the Metadata Manager web
application, the Metadata Manager command line programs do not accept security certificates that have
errors. The property that controls whether a command line program can accept security certificates that have
errors is removed.

Previously, the [Link] property in the [Link] file controlled
certificate validation for mmcmd or mmRepoCmd. You could configure the property to either accept all
certificates or accept only certificates that do not have errors.

Because the command line programs no longer accept security certificates that have errors, the
[Link] property is obsolete. The property no longer appears in the
[Link] files for mmcmd or mmRepoCmd.

For more information about certificate validation for mmcmd and mmRepoCmd, see the "Metadata Manager
Command Line Programs" chapter in the Informatica 9.6.1 HotFix 4 Metadata Manager Administrator Guide.

Changes to Security
This section describes changes to security in version 9.6.1 HotFix 4.

Transport Layer Security (TLS)


Effective in version 9.6.1 HotFix 4, Informatica uses TLS v1.1 and v1.2 to encrypt traffic. Additionally,
Informatica disabled support for TLS v1.0 and lower.

The changes affect secure communication within the Informatica domain, secure connections to web
application services, and connections between the Informatica domain and an external destination.

Release Tasks (9.6.1 HotFix 4)


This section describes the release tasks in version 9.6.1 HotFix 4.

Metadata Manager
This section describes release tasks for Metadata Manager in version 9.6.1 HotFix 4.

Verify the Truststore File for Command Line Programs


Effective in version 9.6.1 HotFix 4, when you configure a secure connection for the Metadata Manager web
application, the Metadata Manager command line programs do not accept security certificates that have

errors. The property that controls whether a command line program can accept security certificates that have
errors is removed.

The [Link] property in the [Link] file controlled certificate
validation for mmcmd or mmRepoCmd. You could set the property to one of the following values:

• NO_AUTH. The command line program accepts the digital certificate, even if the certificate has errors.
• FULL_AUTH. The command line program does not accept a security certificate that has errors.
The NO_AUTH setting is no longer valid. The command line programs now only accept security certificates
that do not contain errors.

If a secure connection is configured for the Metadata Manager web application, and you previously set the
[Link] property to NO_AUTH, you must now configure a truststore file. To configure
mmcmd or mmRepoCmd to use a truststore file, edit the [Link] file that is associated
with mmcmd or mmRepoCmd. Set the [Link] property to the path and file name of the truststore
file.

For more information about the [Link] files for mmcmd and mmRepoCmd, see the
"Metadata Manager Command Line Programs" chapter in the Informatica 9.6.1 HotFix 4 Metadata Manager
Administrator Guide.



Chapter 21

New Features, Changes, and Release Tasks (9.6.1 HotFix 3)
This chapter includes the following topics:

• New Features (9.6.1 HotFix 3)
• Changes (9.6.1 HotFix 3)
• Release Tasks (9.6.1 HotFix 3)

New Features (9.6.1 HotFix 3)


This section describes new features in version 9.6.1 HotFix 3.

Business Glossary
This section describes new Business Glossary features in version 9.6.1 HotFix 3.

Delete Draft Assets


Effective in version 9.6.1 HotFix 3, you can delete draft assets before you publish them for the first time. You
cannot delete assets that are in the review, published, or rejected phases. You cannot delete drafts after you
revise published or rejected assets.

For more information, see the Informatica 9.6.1 HotFix 3 Business Glossary Guide.

Cross Glossary Relationships


Effective in version 9.6.1 HotFix 3, you can create relationships between assets from any glossary. You can
link business terms across glossaries. You can link a policy from any glossary to a business term. You can
view assets from across glossaries in the relationship view diagram. When you import or export a glossary,
you can choose to import or export linked assets from other glossaries.

For more information, see the Informatica 9.6.1 HotFix 3 Business Glossary Guide.

Create Hyperlinks from URLs
Effective in version 9.6.1 HotFix 3, you can create hyperlinks when you insert URLs in the Description, Usage
Context, Example, and Reference Table URL properties for business terms. You can link to assets from any
glossary.

For more information, see the Informatica 9.6.1 HotFix 3 Business Glossary Guide.

Informatica Data Services


This section describes new Informatica Data Services features in version 9.6.1 HotFix 3.

Query datetime data from Microsoft Access

Effective in version 9.6.1 HotFix 3, you can query an SQL data service that contains datetime data from
Microsoft Access. When you configure the Informatica Data Services ODBC Driver, enter the following
parameter in the Optional Parameters field in the Configure Data Source to Informatica Data Services
dialog box:
APPLICATION=ACCESS
When you configure the ODBC driver with this parameter, the Data Integration Service uses the date/time
data type for Microsoft Access date data.

Informatica Transformations
This section describes new Informatica transformation features in version 9.6.1 HotFix 3.

Address Validator Transformation


This section describes the new Address Validator transformation features.

Support for locality and neighborhood identification codes in Belgium addresses

Effective in version 9.6.1 HotFix 3, you can configure the Address Validator transformation to return a
code that uniquely identifies the neighborhood that contains a Belgium address. To return the code,
select the NIS Code output port. Find the port in the BE Supplementary port group.

The NIS Code port returns the five-digit NIS code that identifies the locality and a four-character code
that identifies the neighborhood within the locality. The national statistics directorate in Belgium defines
the codes.

To return the data on the NIS Code port, the Address Validator transformation reads supplementary
address reference data for Belgium. To verify that the Address Validator transformation can read the
supplementary data, add the Supplementary BE Status output port to the transformation. Informatica
adds the NIS Code port, the Supplementary BE Status port, and the BE Supplementary port group in
version 9.6.1 HotFix 3.

Support for Federal Information Addressing System identifiers in Russian Federation addresses

Effective in version 9.6.1 HotFix 3, you can configure the Address Validator transformation to return the
Federal Information Addressing System identifier for an address in the Russian Federation. To return the
identifier, select the FIAS ID output port. Find the port in the RU Supplementary port group.

The FIAS ID port returns up to 36 characters. The Federal State Statistics Service of the Russian
Federation maintains the identifier data.

To return the data on the FIAS ID port, the Address Validator transformation reads supplementary
address reference data for the Russian Federation. To verify that the Address Validator transformation
can read the supplementary data, add the Supplementary RU Status output port to the transformation.



Informatica adds the FIAS ID port, the Supplementary RU Status port, and the RU Supplementary port
group in version 9.6.1 HotFix 3.

Support for unique property reference numbers in Great Britain addresses

Effective in version 9.6.1 HotFix 3, you can configure the Address Validator transformation to return the
unique property reference number for an address in Great Britain. The number uniquely identifies the plot
of land that contains an address in the United Kingdom. To return the unique property reference number,
select the UPRN output port. Find the port in the UK Supplementary port group.

The unique property reference number contains 12 digits. The Ordnance Survey of Great Britain
maintains the unique property reference numbers.

To return the data on the UPRN port, the Address Validator transformation reads supplementary address
reference data for Great Britain. To verify that the Address Validator transformation can read the
supplementary data, add the Supplementary UK Status output port to the transformation. Informatica
adds the UPRN port in version 9.6.1 HotFix 3.

Ability to remove locality and province descriptors from China and Japan addresses

Effective in version 9.6.1 HotFix 3, you can configure the Address Validator transformation to remove
locality descriptors and province descriptors from addresses in China and Japan. For example, the
Address Validator transformation can return Chaoyang instead of Chaoyangqu and Beijing instead of
Beijingshi in Chinese addresses.

To remove the descriptors, configure the Preferred Language property and the Preferred Script property
on the transformation.

Ability to validate Bulgaria addresses in Cyrillic script

Effective in version 9.6.1 HotFix 3, you can validate Bulgaria addresses in the Cyrillic script. By default,
the Address Validator transformation returns the results in the Cyrillic script.

To receive the results in the Latin script, configure the Preferred Script property on the transformation.

Ability to validate Slovakia addresses that contain street name abbreviations

Effective in version 9.6.1 HotFix 3, you can validate Slovakia addresses that contain major street name
abbreviations.

The transformation replaces the abbreviations with the names that the postal authority specifies in the
valid address output.

Ability to retrieve province ISO codes in batch, interactive, and fast completion modes

Effective in version 9.6.1 HotFix 3, the Address Validator transformation extends support for ISO 3166-2
province codes to the following countries:

• Canada
• France
• United States

For example, the transformation returns the province code NC, which identifies North Carolina, for the
following address:
15501 WESTON PKWY STE 150
CARY 27513
USA
For more information, see the Informatica 9.6.1 HotFix 3 Address Validator Port Reference and the Informatica
9.6.1 HotFix 3 Developer Transformation Guide.

Metadata Manager
This section describes new Metadata Manager features in version 9.6.1 HotFix 3.

Metadata Source Versions
Effective in version 9.6.1 HotFix 3, some metadata sources have new supported versions.

The following metadata sources have new supported versions:

• Cloudera Navigator
• ERwin
• Informix
For more information about supported metadata source versions, see the PCAE Metadata Manager XConnect
Support Product Availability Matrix on Informatica Network.

Cloudera Navigator Resources


Effective in version 9.6.1 HotFix 3, you can enable incremental loading and create search queries to decrease
the amount of time it takes for Metadata Manager to load Cloudera Navigator resources.

You can configure the following properties when you create or edit a Cloudera Navigator resource:

Enable incremental load

Enables incremental loading for Cloudera Navigator resources after the first successful resource load.
When you enable this option, Metadata Manager loads recent changes to the metadata instead of
loading complete metadata.

During an incremental load, Metadata Manager extracts only the following entities:

• HDFS entities that were created or changed after the previous resource load
• All Hive tables, views, and partitions
• Operation executions that were created after the previous resource load
• All templates related to the new operation executions

Search query

Query that limits the HDFS entities that Metadata Manager extracts. By default, Metadata Manager does
not extract HDFS entities from certain directories that contain only canary files, log files, history files, or
deleted files. You can update the default search query to prevent Metadata Manager from extracting
other HDFS entities. The query that you enter must use valid Cloudera Navigator search syntax.
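
For example, a hypothetical query along the following lines restricts extraction to HDFS entities that Cloudera Navigator has not marked as deleted. The property names are illustrative; verify them against the search syntax of your Cloudera Navigator version:

sourceType:hdfs AND deleted:false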

For more information about Cloudera Navigator resources, see the Informatica 9.6.1 HotFix 3 Metadata
Manager Administrator Guide.

Microsoft SQL Server Resources


Effective in version 9.6.1 HotFix 3, Metadata Manager extracts the value of the MS_Description extended
property for Microsoft SQL Server table and view columns.

For more information about extracting extended properties for Microsoft SQL Server resources, see the
Informatica 9.6.1 HotFix 3 Metadata Manager Administrator Guide.

PowerExchange Adapters for PowerCenter


This section describes new PowerCenter adapter features in version 9.6.1 HotFix 3.



PowerExchange for SAP Netweaver
Effective in version 9.6.1 HotFix 3, you can set the AddQuotesForCachedLookup custom session property to
Yes. This ensures that sessions do not fail when you use HANA table metadata that contains special
characters, symbols, or lowercase characters in cached lookups.
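
For example, in a hypothetical session, you would add the following name-value pair to the custom session properties; the property name and value come from this feature description:

AddQuotesForCachedLookup=Yes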

PowerExchange for Greenplum


Effective in version 9.6.1 HotFix 3, you can configure the MAX_LINE_LENGTH attribute in the session
properties when you load data to a column. This ensures that you can load data to a column with precision
104857600.

Changes (9.6.1 HotFix 3)


This section describes changes in version 9.6.1 HotFix 3.

Business Glossary
This section describes changes to Business Glossary in version 9.6.1 HotFix 3.

Business Glossary Export File


Effective in version 9.6.1 HotFix 3, the order of worksheets in the Business Glossary export file is rearranged.
Worksheets that you should not edit in Microsoft Excel are hidden. The first worksheet is a home page that
provides a brief description of the other worksheets in the export file.

Previously, the export file did not have hidden worksheets and a home page.

Business Glossary Security


Effective in version 9.6.1 HotFix 3, a user who is assigned the Manage Glossaries privilege in the Analyst tool
for a particular glossary cannot perform user and role management for any other glossary.

Previously, a user who was assigned the Manage Glossaries privilege in the Analyst tool could modify the
permissions and privileges of a user for any glossary.

Glossary Import
Effective in version 9.6.1 HotFix 3, when you import a glossary that is not present in Business Glossary, the
Analyst tool creates the glossary during import. When you import a glossary, the Analyst tool automatically
populates the custom properties which are present in the glossary with values from the export file. The
Analyst tool also attaches the custom properties to the relevant templates, even if the custom properties
were not attached to any template before the import process.

Previously, if you wanted to import a glossary that was not present in Business Glossary, you first needed to
create the glossary in the Analyst tool before importing the glossary contents from the export file. The
Analyst tool did not populate the custom properties with information from the export file when they were not
attached to any template.

Synonyms
Effective in version 9.6.1 HotFix 3, synonyms in business terms have the following changed behavior:

• You can remove or modify the Retirement Date that you have set for the Synonym property.
• You do not have to use the date picker to set the Create Date and Retirement Date. You can manually set
the date, but it must be in the format determined by the locale of the installation.
• You can see the Create Date of a synonym when you open a business term.
Previously, you could not remove or modify the retirement date. You could only use the date picker to set the
date. You could not view the date of creation in the business term.

Informatica Transformations
This section describes the changes to the Informatica transformations in version 9.6.1 HotFix 3.

Address Validator Transformation


This section describes the changes to the Address Validator transformation.

• Effective in version 9.6.1 HotFix 3, the Address Validator transformation uses version 5.7.0 of the
Informatica AddressDoctor software engine. The engine enables the features that Informatica adds to the
Address Validator transformation in version 9.6.1 HotFix 3.
Previously, the transformation used version 5.6.0 of the Informatica AddressDoctor software engine.
• Effective in version 9.6.1 HotFix 3, you can configure the Address Validator transformation to return the
locality information in Switzerland addresses in French, German, or Italian. To set the language, use the
Preferred Language property.
Previously, the Address Validator transformation returned all information in a Switzerland address in the
main language of the region to which the address belonged.
• Effective in version 9.6.1 HotFix 3, the Address Validator transformation returns rooftop-level geocodes
for addresses in the United Kingdom that do not include house numbers or building numbers.
Previously, the transformation returned rooftop-level geocodes for United Kingdom addresses that include
house numbers or building numbers.

Data Processor Transformation


This section describes the changes to the Data Processor transformation.

XmlToXlsx with Template


The XmlToXlsx document processor converts XML documents to Microsoft Excel .xlsx format. Effective in
version 9.6.1 HotFix 3, the XmlToXlsx document processor can optionally use an .xlsx template with the XML
document to generate the .xlsx document.

Previously, you generated the .xlsx document from the XML document alone, without a template.

Metadata Manager
This section describes changes to Metadata Manager in version 9.6.1 HotFix 3.



Business Glossary Resources
Effective in version 9.6.1 HotFix 3, Business Glossary resources have behavior changes.

Business Glossary resources have the following behavior changes:

Privileges required to load Business Glossary resources

Effective in 9.6.1 HotFix 3, to load Business Glossary resources, you need the Load Resource, Manage
Resource, and View Model privileges.

Previously, to load Business Glossary resources, you needed the Load Resource and Manage Models
privileges for the Metadata Manager Service.

Migrating related catalog objects after upgrade

Effective in version 9.6.1 HotFix 3, do not run the mmcmd migrateBGLinks command after you upgrade a
business glossary from version 9.5.x. The migrateBGLinks command restores related catalog objects for
upgraded business glossaries. The command now runs automatically the first time that you load a
Business Glossary resource after upgrade.

Previously, you had to run the migrateBGLinks command as the last step in the upgrade process for
business glossaries.

Related catalog objects for categories

Effective in version 9.6.1 HotFix 3, you cannot create related catalog objects for categories. You can still
create related catalog objects for business terms.

Previously, you could relate categories to other categories or to business glossaries in Metadata
Manager, but you could not relate categories to other metadata objects. If you did create category to
category or category to glossary relationships in Metadata Manager, Metadata Manager did not update
these relationships in the Analyst tool business glossary.

To create term to term, term to category, category to term, or category to category relationships, use the
Analyst tool.

Property names that contain special characters

Effective in 9.6.1 HotFix 3, Metadata Manager can load Business Glossary resources that contain
custom properties with special characters in the name. However, Metadata Manager does not extract
custom properties that contain special characters in the name.

Specifically, Metadata Manager does not extract custom properties with names that contain any of the
following special characters:

~ ' & * ( ) [ ] | \ : ; " ' < > , ? /

Previously, if you tried to load a Business Glossary resource that contained custom properties with any
of these characters in the name, the load failed.

Microsoft SQL Server Integration Services Resources


Effective in version 9.6.1 HotFix 3, the property that controls how Metadata Manager displays lineage for
Script components that are used as transformations is renamed to Hide transformation scripts.

Previously, the property was called Transformation scripts.

SAP PowerDesigner Resources


Effective in version 9.6.1 HotFix 3, Sybase PowerDesigner resources are called SAP PowerDesigner
resources.

Permissions
Effective in version 9.6.1 HotFix 3, permissions control which resources users can access on the Load
tab as well as the Browse tab. To perform an action on a resource, a user needs both the appropriate
privilege and the appropriate permission on the resource.

For example, to view a resource on the Load tab, a user needs the View Resource privilege and read
permission on the resource. To load a resource, a user needs the Load Resource privilege and write
permission on the resource. To edit a resource, a user needs the Manage Resource privilege and write
permission on the resource.

Because of this change, the resources that a user sees on the Load tab match the resources that the user
sees on the Browse tab. The user no longer sees all resources on the Load tab unless the user has at least
read permission on all resources.

Previously, permissions determined which resources and metadata objects users could access on the
Browse tab, but they did not affect the Load tab. Permissions for the Browse tab are not changed.

Metadata Manager Reports


Effective in version 9.6.1 HotFix 3, when you restart the domain, you no longer have to recycle the Metadata
Manager Service to enable the View Reports button. If the domain contains a Reporting and Dashboards
Service, the View Reports button is always enabled.

Previously, when you restarted the domain, you had to recycle the Metadata Manager Service to enable the
View Reports button.

Security
This section describes changes to security in version 9.6.1 HotFix 3.

Effective in version 9.6.1 HotFix 3, Informatica dropped support for SSL keys that use fewer than 512 bits if
they use RSA encryption. This change affects secure communication within the Informatica domain and
secure connections to web application services.

If your SSL keys are affected by this change, you must generate new RSA encryption based SSL keys with
more than 512 bits or use an alternative encryption algorithm. Then, use the new keys to create the files
required for secure communication within the domain or for secure connections to web application services.
For more information about the files required for secure communication within the Informatica domain or
secure connections, see the Informatica Security Guide.

Previously, Informatica supported RSA encryption based SSL keys that use fewer than 512 bits.

Release Tasks (9.6.1 HotFix 3)


This section describes the release tasks in version 9.6.1 HotFix 3.

Metadata Manager
This section describes release tasks for Metadata Manager in version 9.6.1 HotFix 3.



Permissions Associated with Load Privileges
Effective in version 9.6.1 HotFix 3, permissions control which resources users can access on the Load
tab as well as the Browse tab. A user with any privilege in the Load privilege group requires permissions to
perform actions on a particular resource. For example, to load a resource, a user needs the Load Resource
privilege and write permission on the resource.

After you upgrade to or apply 9.6.1 HotFix 3, you must verify permissions for each user that has privileges in
the Load privilege group. If a user does not have the appropriate permissions on a resource, the user cannot
view, load, or manage the resource.

The following table lists the privileges and permissions required to manage an instance of a resource in the
Metadata Manager warehouse:

Privilege: View Resource
Includes Privileges: -
Permission: Read
Description: User is able to perform the following actions:
- View resources and resource properties in the Metadata Manager warehouse.
- Export resource configurations.
- Download the Metadata Manager Agent installer.

Privilege: Load Resource
Includes Privileges: View Resource
Permission: Write
Description: User is able to perform the following actions:
- Load metadata for a resource into the Metadata Manager warehouse.*
- Create links between objects in connected resources for data lineage.
- Configure search indexing for resources.
- Import resource configurations.

Privilege: Manage Schedules
Includes Privileges: View Resource
Permission: Write
Description: User is able to perform the following actions:
- Create and edit schedules.
- Add schedules to resources.

Privilege: Purge Metadata
Includes Privileges: View Resource
Permission: Write
Description: User is able to remove metadata for a resource from the Metadata Manager warehouse.

Privilege: Manage Resource
Includes Privileges: Purge Metadata, View Resource
Permission: Write
Description: User is able to create, edit, and delete resources.

* To load metadata for Business Glossary resources, the Load Resource, Manage Resource, and View Model privileges
are required.

Configure permissions on the Security tab of the Metadata Manager application. For more information about
configuring permissions, see the Informatica 9.6.1 HotFix 3 Metadata Manager Administrator Guide.

Chapter 22

New Features, Changes, and Release Tasks (9.6.1 HotFix 2)
This chapter includes the following topics:

• New Features (9.6.1 HotFix 2)
• Changes (9.6.1 HotFix 2)
• Release Tasks (9.6.1 HotFix 2)

New Features (9.6.1 HotFix 2)


This section describes new features in version 9.6.1 HotFix 2.

Big Data
This section describes new big data features in version 9.6.1 HotFix 2.

Informatica Analyst
Big Data Edition has the following new features and enhancements for the Analyst tool:

Analyst tool integration with Hadoop

Effective in version 9.6.1 HotFix 2, you can enable the Analyst tool to communicate with a Hadoop
cluster on a specific Hadoop distribution. You must configure the JVM Command Line Options for the
Analyst Service.

For more information, see the Informatica 9.6.1 HotFix 2 Application Services Guide.

Analyst tool connections

Effective in version 9.6.1 HotFix 2, you can use the Analyst tool to connect to Hive or HDFS sources and
targets.

For more information, see the Informatica 9.6.1 HotFix 2 Analyst User Guide.

Data Warehousing
Big Data Edition has the following new features and enhancements for data warehousing:

Binary Data Type

Effective in version 9.6.1 HotFix 2, a mapping in the Hive environment can process expression functions
that use binary data.

For more information, see the Informatica 9.6.1 HotFix 2 Big Data Edition User Guide.

Timestamp and Date Data Type

Effective in version 9.6.1 HotFix 2, PowerExchange for Hive supports the Timestamp and Date data
types.

For more information, see the Informatica 9.6.1 HotFix 2 Big Data Edition User Guide.

File Format

Effective in version 9.6.1 HotFix 2, you can use the Data Processor transformation to read Parquet input
or output.

Apache Parquet is a columnar storage format that can be processed in a Hadoop environment. Parquet
is implemented to address complex nested data structures, and uses a record shredding and assembly
algorithm.

For more information, see the Informatica 9.6.1 HotFix 2 Data Transformation User Guide.

Data Lineage
Effective in version 9.6.1 HotFix 2, you can perform data lineage analysis on big data sources and targets.
You can create a Cloudera Navigator resource to extract metadata for big data sources and targets and
perform data lineage analysis on the metadata.

For more information, see the Informatica 9.6.1 HotFix 2 Metadata Manager Administrator Guide.

Hadoop Ecosystem
Big Data Edition has the following new features and enhancements for the Hadoop ecosystem:

Hadoop Distributions

Effective in version 9.6.1 HotFix 2, Big Data Edition added support for the following Hadoop distributions:

• Cloudera CDH 5.2


• Hortonworks HDP 2.2
• IBM BigInsights [Link]
• Pivotal HD 2.1

Big Data Edition dropped support for the following Hadoop distributions:

• Cloudera CDH 5.0


• Cloudera CDH 5.1
• Hortonworks HDP 2.1
• Pivotal HD 1.1

For more information, see the Informatica 9.6.1 HotFix 2 Big Data Edition Installation and Configuration
Guide.

Effective in version 9.6.1 HotFix 2, Big Data Edition supports Cloudera CDH clusters on Amazon EC2.

Kerberos Authentication

Effective in version 9.6.1 HotFix 2, you can configure user impersonation for the native environment.
Configure user impersonation to enable different users to run mappings or connect to big data sources
and targets that use Kerberos authentication.

For more information, see the Informatica 9.6.1 HotFix 2 Big Data Edition User Guide.

Performance Optimization
Big Data Edition has the following new features for performance optimization:

Compress data on temporary staging tables

Effective in version 9.6.1 HotFix 2, you can enable data compression on temporary staging tables to
optimize performance when you run a mapping in the Hive environment. When you enable data
compression on temporary staging tables, mapping performance might increase.

To enable data compression on temporary staging tables, you must configure the Hive connection to use
the codec class name that the Hadoop cluster uses. You must also configure the Hadoop cluster to
enable compression on temporary staging tables.
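
For example, a minimal sketch of the Hive connection settings, assuming the cluster uses the Snappy codec
(the property labels are assumptions; check the Hive connection editor in your version):

Temporary Table Compression Codec: Custom
Codec Class Name: org.apache.hadoop.io.compress.SnappyCodec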

For more information, see the Informatica 9.6.1 HotFix 2 Big Data Edition User Guide.

Parallel sort

Effective in version 9.6.1 HotFix 2, when you use a Sorter transformation in a mapping, the Data
Integration Service enables parallel sorting by default when it pushes the mapping logic to the Hadoop
cluster.

For more information, see the Informatica 9.6.1 HotFix 2 Big Data Edition User Guide.

Profile Run on Hadoop Sources in Informatica Analyst


Effective in version 9.6.1 HotFix 2, you can create and run a column profile, rule profile, and data domain
discovery on Hive and HDFS sources in the Analyst tool.

For more information, see the Informatica 9.6.1 HotFix 2 Big Data Edition User Guide.

Business Glossary
This section describes new Business Glossary features in version 9.6.1 HotFix 2.

Refresh Asset

Effective in version 9.6.1 HotFix 2, you can refresh an asset in the Glossary workspace. Refresh the asset
to view updates to the properties that content managers made after you opened the asset.

For more information, see the Informatica 9.6.1 HotFix 2 Business Glossary Guide.

Alert for Duplicate Asset Name

Effective in version 9.6.1 HotFix 2, the Analyst tool displays an alert when you try to create an asset with
a name that already exists in the glossary. You can ignore the alert and create the asset with a duplicate
name.

For more information, see the Informatica 9.6.1 HotFix 2 Business Glossary Guide.

LDAP Authentication in Business Glossary Desktop

Effective in version 9.6.1 HotFix 2, you can use an LDAP domain when you configure server settings to
enable the Business Glossary Desktop client to reference the business glossary on a machine that hosts
the Analyst Service.



For more information, see the Informatica 9.6.1 HotFix 2 Business Glossary Desktop Installation and
Configuration Guide.

Command Line Programs


This section describes new and changed commands and options for the Informatica command line programs
in version 9.6.1 HotFix 2.

isp Command
Effective in version 9.6.1 HotFix 2, the following table describes an updated isp command:

Command Description

UpdateGrid Contains the following new option:


-ul. Optional. Updates the current node list with the values in the -nl option instead of replacing the list
of nodes previously assigned to the grid. If true, infacmd updates the node list with the list of nodes
specified using the -nl option along with the nodes previously assigned to the grid. If false, infacmd
replaces the node list with the list of nodes specified using the -nl option. Default is false.
Contains the following updated option:
-nl. Required. Names of the nodes that you want to assign to the grid. This list of nodes replaces or
updates the list of nodes previously assigned to the grid based on the -ul option defined.
If you specify the -ul option, the -nl option updates the list of nodes previously assigned to the grid. If
you do not specify the -ul option, the -nl option replaces the list of nodes previously assigned to the grid.
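
For example, the following sketch adds nodes to an existing grid (the domain, user, grid, and node names are
placeholders):

infacmd isp UpdateGrid -dn MyDomain -un Administrator -pd MyPassword -gn MyGrid -nl node3 node4 -ul true

With -ul set to true, node3 and node4 are added to the nodes already assigned to MyGrid. Without the -ul
option, the same command replaces the node list with node3 and node4.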

Data Quality Accelerators


This section describes new accelerator features in version 9.6.1 HotFix 2.

Updated reference data sets

Effective in version 9.6.1 HotFix 2, Informatica updates the reference data sets that the accelerator rules
use to analyze and enhance data.

For more information, see the Informatica Data Quality 9.6.1 HotFix 2 Accelerator Guide.

Informatica Developer
This section describes new Informatica Developer features in version 9.6.1 HotFix 2.

Microsoft SQL Server Datetime2 Data Type

Effective in version 9.6.1 HotFix 2, Informatica Developer supports the Microsoft SQL Server Datetime2
data type. The Datetime2 data type can store a range of values from Jan 1, 0001 A.D. [Link] to Dec 31,
9999 A.D. [Link].9999999.

Informatica Domain
This section describes new Informatica domain features in version 9.6.1 HotFix 2.

Informatica on Amazon EC2

Effective in version 9.6.1 HotFix 2, you can set up and launch Informatica services with multiple nodes on
Amazon EC2. You can launch an Informatica domain that contains up to four nodes.

Informatica DiscoveryIQ

Effective in version 9.6.1 HotFix 2, Informatica DiscoveryIQ, a product usage tool, sends routine reports
on data usage and system statistics to Informatica. Data collection and upload are enabled by default.
You can choose not to send any usage statistics to Informatica.

Informatica Transformations
This section describes new Informatica transformation features in version 9.6.1 HotFix 2.

Address Validator Transformation


This section describes the new features on the Address Validator transformation in version 9.6.1 HotFix 2.

Support for Taiwan addresses in the Mandarin Traditional Chinese script

Effective in version 9.6.1 HotFix 2, you can use the Address Validator transformation to validate Taiwan
addresses in the Mandarin Traditional Chinese script. You can use ports from the Discrete or Multiline
group to define the input address.

To enter a Mandarin Traditional Chinese address on a single line, use the Formatted Address Line 1 port.

Enhancements to United States address validation

Effective in version 9.6.1 HotFix 2, the Address Validator transformation returns the county name when
the address contains a valid ZIP code and locality. The transformation can add the county name
regardless of an Ix match status for the address. The transformation adds the name to a Province
output port. If the state identifier is absent from the address, the transformation adds the state identifier
to a Province port.

When you validate an address that contains hyphenated house numbers, the transformation moves the
second part of the house number to a Sub-building port.

Configurable output format for element descriptors

Effective in version 9.6.1 HotFix 2, you can configure the Address Validator transformation to specify the
output format for the following elements:

• Street, building, and sub-building descriptors in Australia and New Zealand addresses
• Street descriptors in German addresses.

By default, the transformation returns the descriptor that the reference database specifies for the
address. To specify the output format for the descriptors, configure the Global Preferred Descriptor
property on the transformation.

Support for Address Key codes in United Kingdom Addresses

Effective in version 9.6.1 HotFix 2, you can return the address key for a United Kingdom address. The
address key is an eight-digit numeric code that identifies the address in the Postcode Address File from the
Royal Mail. To add the address key to an address, select the Address Key port. To return the address key,
the transformation reads supplementary reference data for the United Kingdom.

Extended data support for Japan

Effective in version 9.6.1 HotFix 2, the Address Validator transformation can validate Ban (block)
information in a Japanese address. The Address Validator transformation writes the data to the Street
Name 2 port or an equivalent port for dependent street data.



A Japanese address lists the address elements in order of size, from the largest or most general unit to
the smallest or most specific unit. The Ban element follows the Chome element and precedes the Go
element in the address.

Enhancements to Japan address validation

Effective in version 9.6.1 HotFix 2, you can configure the Address Validator transformation to add the
Gaiku code to a Japanese address. To add the code to the address, select the Gaiku Code port.

You can combine the current Choumei Aza code and the Gaiku code in a single string and return the
address that the codes identify. To return the complete address, select the Choumei Aza and Gaiku Code
JP port and configure the transformation to run in address code lookup mode.

The Japanese reference data contains the Gaiku code, the current Choumei Aza code, and any earlier
version of the Choumei Aza code for the address. When you set the Matching Extended Archive property
to ON, the transformation writes all of the codes to the output address.

Support for seven-digit postal codes in Israel

Effective in version 9.6.1 HotFix 2, the Address Validator transformation supports the seven-digit postal
codes that Israel Post defines for addresses in Israel. The seven-digit postal codes replace the five-digit
postal codes that Israel Post previously defined. For example, the seven-digit postal code for Nazareth in
Israel is 1623726. Previously, the postal code for Nazareth was 16237.

Enhancement to address validation in Germany, Austria, and Switzerland

Effective in version 9.6.1 HotFix 2, the Address Validator transformation recognizes keywords, such as
Zimmer and App, in the Street Number ports for addresses from Germany, Austria, and Switzerland. The
Address Validator transformation writes the keywords to sub-building ports in the output address.

Support for the IRIS code in French addresses

Effective in version 9.6.1 HotFix 2, you can configure the Address Validator transformation to add the
IRIS code to an address in France. To add the code to the address, select the INSEE-9 Code output port.

An IRIS code uniquely identifies a statistical unit in a commune in France. INSEE, or the National Institute
for Statistics and Economic Research in France, defines the codes. France has approximately 16,000
IRIS units.

Support for rooftop geocoding in the United Kingdom

Effective in version 9.6.1 HotFix 2, you can configure the Address Validator transformation to return
rooftop-level geocodes for United Kingdom addresses. Rooftop geocodes identify the center of the
primary building on a site or a parcel of land.

To generate the rooftop geocodes, set the Geocode Data Type property on the transformation to Arrival
Point. You must also install the Arrival Point reference data for the United Kingdom.

Improved address reference data for Spain

Effective in version 9.6.1 HotFix 2, Informatica updates the address reference data for Spain. The
Address Validator transformation can use the address reference data to validate sub-building-level
information in Spanish addresses.

Improved address validation and address reference data for Turkey

Effective in version 9.6.1 HotFix 2, Informatica updates the address reference data for Turkey.

The Address Validator transformation can also perform the following operations when it validates
Turkish addresses:

• The transformation can identify a building name and a street name on the Delivery Address Line 1
port.

• The transformation adds a slash symbol (/) between a building element and a sub-building element
when the sub-building element is a number.

Improved address validation for Brazil

Effective in version 9.6.1 HotFix 2, Informatica adds the following improvements to address validation
for addresses in Brazil:

• The Address Validator transformation can add a third level of sub-building information to the Delivery
Address Line and Formatted Address Line ports. The Brazil address system contains three levels of
sub-building information.
• The Address Validator transformation validates kilometer information on the Street Additional Info
port.
Note: The Address Validator transformation uses a comma, and not a decimal point, in kilometer
information for Brazil.

For more information, see the Informatica 9.6.1 HotFix 2 Address Validator Port Reference and the Informatica
9.6.1 HotFix 2 Developer Transformation Guide.

Data Processor Transformation


This section describes the new features in the Data Processor transformation in version 9.6.1 HotFix 2:

RunMapplet

The RunMapplet action calls and runs a mapplet as part of a Data Processor transformation. The output
of RunMapplet is read into the data holder specified in the RunMapplet action. Use the RunMapplet
action to perform tasks such as data masking, data quality operations, data lookups, and other activities
usually associated with relational transformations.

Validation Rules Editor

You can use the Validation Rules editor to create user-defined rules that validate XML data. If the data
violates the rules, the action generates an XML validation report.

Parquet Input or Output

Use the New Transformation wizard to create a Data Processor transformation with Parquet input or
output.

Create an XMap Variable for the XMap Source or Target


You can create an XMap variable to serve as the XMap source or target.

For more information, see the Informatica 9.6.1 HotFix 2 Data Transformation User Guide.

Metadata Manager
This section describes new Metadata Manager features in version 9.6.1 HotFix 2.

Cloudera Navigator Resources


Effective in version 9.6.1 HotFix 2, you can create and configure a Cloudera Navigator resource to extract
metadata from the metadata component of Cloudera Navigator. You can create one Cloudera Navigator
resource for each Hadoop cluster that is managed by Cloudera Manager.

For more information about creating and configuring Cloudera Navigator resources, see the Informatica 9.6.1
HotFix 2 Metadata Manager Administrator Guide.



For more information about supported metadata source versions, see the PCAE Metadata Manager XConnect
Support Product Availability Matrix on Informatica Network:
[Link]

Microsoft SQL Server Integration Services (SSIS) Resources


Effective in version 9.6.1 HotFix 2, you can create and configure a Microsoft SQL Server Integration Services
resource to extract metadata from Microsoft SQL Server Integration Services packages. Metadata Manager
can extract metadata from packages in the Microsoft SQL Server repository or from a package (.dtsx) file.

For more information about creating and configuring Microsoft SQL Server Integration Services resources,
see the Informatica 9.6.1 HotFix 2 Metadata Manager Administrator Guide.

For more information about supported metadata source versions, see the PCAE Metadata Manager XConnect
Support Product Availability Matrix on Informatica Network:
[Link]

Embarcadero ERStudio Resources


Effective in version 9.6.1 HotFix 2, you can prevent Metadata Manager from importing attachments from
Embarcadero ERStudio. Attachments are also called user-defined properties, or UDPs. To prevent Metadata
Manager from importing UDPs, enable the Skip UDP Extraction property when you configure the resource.

For more information about configuring Embarcadero ERStudio resources, see the Informatica 9.6.1 HotFix 2
Metadata Manager Administrator Guide.

PowerCenter Resources
Effective in version 9.6.1 HotFix 2, you can create and load a PowerCenter resource when the PowerCenter
repository database type is IBM DB2 for LUW and the database user name differs from the schema name. To
specify a schema name that differs from the database user name, enter the schema name in the Schema
Name property when you configure the PowerCenter resource.

For more information about configuring PowerCenter resources, see the Informatica 9.6.1 HotFix 2 Metadata
Manager Administrator Guide.

PowerCenter Flat Files in the Impact Summary


Effective in version 9.6.1 HotFix 2, the impact summary lists the flat files that are used in PowerCenter
resources.

For more information about viewing the impact summary, see the Informatica 9.6.1 HotFix 2 Metadata
Manager User Guide.

PowerCenter
This section describes new PowerCenter features in version 9.6.1 HotFix 2.

PowerCenter Upgrade
Effective in version 9.6.1 HotFix 2, PowerCenter preserves the [Link] file when you upgrade from a hotfix
or a base release of the same version. The upgrade operation preserves an [Link] file in the server/bin
directory and creates an empty configuration file named [Link] in the same directory.

When you upgrade from an earlier PowerCenter version, the upgrade operation writes an empty [Link] file
to the server/bin directory. The upgrade operation creates a backup copy of any [Link] file that it finds in
the directory.

For more information, see the Informatica 9.6.1 HotFix 2 Upgrade Guides.

PowerExchange
This section describes new PowerExchange features in version 9.6.1 HotFix 2.

PowerExchange infacmd pwx Commands


A new parameter is available for some PowerExchange Logger Service infacmd pwx commands.

The infacmd pwx CreateLoggerService and infacmd pwx UpdateLoggerService commands can now include
the following optional startup parameter in the -StartParameters option:

encryptepwd=encryption_password

A password in encrypted format that enables the encryption of PowerExchange Logger log files. When
this password is specified, the PowerExchange Logger can generate a unique encryption key for each
Logger log file. The password is stored in the CDCT file in encrypted format. The password is not stored
in CDCT backup files and is not displayed in CDCT reports that you generate with the PowerExchange
PWXUCDCT utility. To use this encryption password, you must also specify coldstart=Y in the
-StartParameters option.
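
For example, a sketch of a cold start with log-file encryption (the service, domain, and credential names are
placeholders, and the parameter separator may differ; see the Command Reference):

infacmd pwx UpdateLoggerService -dn MyDomain -un Administrator -pd MyPassword -sn MyLoggerService -StartParameters "coldstart=Y,encryptepwd=MyEncryptedPassword"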

For more information, see the Informatica 9.6.1 HotFix 2 Command Reference.

Encryption of PowerExchange Logger Log Files


You can now encrypt PowerExchange Logger Service log files to prevent unauthorized access to sensitive
data that is stored in the log files.

To enable log-file encryption for a PowerExchange Logger Service, specify an encryption password in the
startup parameters for a cold start of the PowerExchange Logger Service. You enter the encryption password
in one of the following ways:

• In the infacmd pwx CreateLoggerService or infacmd pwx UpdateLoggerService command, add the
encryptepwd parameter in the -StartParameters option.
• In the Informatica Administrator, edit the PowerExchange Logger Service configuration properties. In the
Start Parameters property, add the encryptepwd parameter.

Note: The PowerExchange Logger uses AES encryption algorithms. You can set the type of AES algorithm in
the ENCRYPTOPT statement of the PowerExchange Logger configuration file.

PowerExchange Adapters
This section describes new PowerExchange adapter features in version 9.6.1 HotFix 2.



PowerExchange Adapters for Informatica
This section describes new Informatica adapter features in version 9.6.1 HotFix 2.

PowerExchange for Cassandra


Effective in version 9.6.1 HotFix 2, you can tune consistency levels when you read data from or write data to
a Cassandra database. The consistency level determines how data is synchronized across all replicas. Based
on your requirements for data accuracy and response time, you can set the appropriate consistency level.

For more information, see the Informatica PowerExchange for Cassandra 9.6.1 HotFix 2 User Guide.

PowerExchange for LinkedIn


Effective in version 9.6.1 HotFix 2, PowerExchange for LinkedIn secures all API calls to LinkedIn by using
HTTPS URLs.

For more information, see the Informatica PowerExchange for LinkedIn 9.6.1 HotFix 2 User Guide.

PowerExchange for DataSift


Effective in version 9.6.1 HotFix 2, PowerExchange for DataSift has the following new features and
enhancements:

• You can retrieve data from the DataSift buffer.


• You can pause and resume the Historics query.
• You can set the maximum number of attempts to re-establish a connection to DataSift if a connection
fails.
For more information, see the Informatica PowerExchange for DataSift 9.6.1 HotFix 2 User Guide.

PowerExchange for Hive


Effective in version 9.6.1 HotFix 2, PowerExchange for Hive has the following new features and
enhancements:

• You can use the user-defined functions in Informatica to transform the Binary data type in a Hive
environment.
• PowerExchange for Hive processes sources and targets that contain the Timestamp data type. The
Timestamp data type format is YYYY-MM-DD HH:MM:[Link]. The Timestamp data type has a
precision of 29 and a scale of 9.
• PowerExchange for Hive processes sources and targets that contain the Date data type. The Date data
type has a range of 0000-01-01 to 9999-12-31. The format is YYYY-MM-DD. The Date data type has a
precision of 10 and a scale of 0.
For more information, see the Informatica PowerExchange for Hive 9.6.1 HotFix 2 User Guide.

PowerExchange for MongoDB


Effective in version 9.6.1 HotFix 2, the MongoDB ODBC driver creates a virtual table for each column that
contains arrays or nested arrays. You can use the MongoDB ODBC driver to read up to five levels of nested
columns and write up to three levels of nested columns.

For more information, see the Informatica PowerExchange for MongoDB 9.6.1 HotFix 2 User Guide.

PowerExchange for Salesforce


Effective in version 9.6.1 HotFix 2, PowerExchange for Salesforce has the following new features and
enhancements:

• You can configure PowerExchange for Salesforce to capture changed data from a Salesforce object that
is replicateable and contains the CreatedDate and SysModstamp fields.

• You can use PowerExchange for Salesforce to connect to Salesforce API v30 and v31.
• The Data Integration Service can push Filter transformation logic to Salesforce sources.

For more information, see the Informatica PowerExchange for Salesforce 9.6.1 HotFix 2 User Guide.

PowerExchange Adapters for PowerCenter


This section describes new PowerCenter adapter features in version 9.6.1 HotFix 2.

PowerExchange for Cassandra


Effective in version 9.6.1 HotFix 2, you can tune consistency levels when you read data from or write data to
a Cassandra database. The consistency level determines how data is synchronized across all replicas. Based
on your requirements for data accuracy and response time, you can set the appropriate consistency level.

For more information, see the Informatica PowerExchange for Cassandra 9.6.1 HotFix 2 User Guide for
PowerCenter.

PowerExchange for MongoDB


Effective in version 9.6.1 HotFix 2, the MongoDB ODBC driver creates a virtual table for each column that
contains arrays or nested arrays. You can use the MongoDB ODBC driver to read up to five levels of nested
columns and write up to three levels of nested columns.

For more information, see the Informatica PowerExchange for MongoDB 9.6.1 HotFix 2 User Guide for
PowerCenter.

PowerExchange for Salesforce Analytics


Effective in version 9.6.1 HotFix 2, you can use PowerExchange for Salesforce Analytics to write data to
Salesforce Analytics. You can then run queries on the Salesforce Analytics database to analyze the data.

For more information, see the Informatica PowerExchange for Salesforce Analytics 9.6.1 HotFix 2 User Guide
for PowerCenter.

PowerExchange for Vertica


Effective in version 9.6.1 HotFix 2, you can perform the following tasks with PowerExchange for Vertica:

• You can create Vertica targets in the Target Designer.


• You can use relational mode to read large volumes of data from a Vertica source. To read data in
relational mode, you must create a Vertica relational connection and configure the session to use a
relational reader.
• You can use relational mode to update or delete data in a Vertica target. To write data in relational mode,
you must create a Vertica relational connection and configure the session to use a relational writer.
• When you use bulk mode to write large volumes of data to a Vertica target, you can configure the session
to create a staging file. On UNIX operating systems, when you enable file staging, you can also compress
the data in a GZIP format. By compressing the data, you can reduce the size of data that is transferred
over the network and improve session performance.
• You can run sessions on a grid to improve session performance.
• The PowerCenter Integration Service can push transformation logic to Vertica sources and targets that
use native drivers. For more information, see the Informatica PowerCenter 9.6.1 HotFix 2 Advanced
Workflow Guide.

For more information, see the Informatica PowerExchange for Vertica 9.6.1 HotFix 2 User Guide for
PowerCenter.



Workflows
This section describes new workflow features in version 9.6.1 HotFix 2.

Pushdown Optimization for Amazon Redshift


Effective in version 9.6.1 HotFix 2, the PowerCenter Integration Service can push transformation logic to
Amazon Redshift sources and targets when the connection type is ODBC.

For more information, see the Informatica PowerCenter 9.6.1 HotFix 2 Advanced Workflow Guide.

Support for Teradata Array Insert


Effective in version 9.6.1 HotFix 2, when you use an ODBC connection to connect to a Teradata target, you
can insert arrays of data into the Teradata target instead of inserting data row by row. Inserting arrays of
data results in higher session performance.

To insert arrays of data into a Teradata target by using an ODBC connection, configure the
OptimizeTeradataWrite custom property at the session level or at the PowerCenter Integration Service level
and set its value to 1.
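
A minimal sketch of the property setting, entered as a name-value pair at the session level or on the
PowerCenter Integration Service:

OptimizeTeradataWrite=1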

For more information, see the Informatica PowerCenter 9.6.1 HotFix 2 Workflow Basics Guide.

Changes (9.6.1 HotFix 2)


This section describes changes in version 9.6.1 HotFix 2.

Connectivity
This section describes changes to connectivity in version 9.6.1 HotFix 2.

Sybase IQ External Loader Connection Attributes


Effective in version 9.6.1 HotFix 2, PowerCenter supports connectivity to Sybase IQ database version 16.0 by
default. Informatica dropped support for the following Sybase IQ external loader connection attributes
because Sybase IQ version 16.0 and later does not support them:

• Block factor
• Block size

If you upgrade to version 9.6.1 HotFix 2 and want to use the block factor and block size connection attributes
while connecting to a Sybase IQ database version that is earlier than 16.0, configure the
SybaseIQPre16VersionSupport custom property and set its value to Yes.
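
A minimal sketch of the property setting, entered as a name-value pair where your environment defines
PowerCenter custom properties:

SybaseIQPre16VersionSupport=Yes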

Informatica Analyst
The following changes apply to Informatica Analyst:

• Effective in 9.6.1 HotFix 2, the Analyst tool displays the full name of the user who owns or most recently
updated a Model repository object. The full name appears in any location that identifies the user, for
example in the asset details in the library workspace.
Previously, the Analyst tool displayed the login name of the user in the library workspace and in other
locations.
To view the full name, the login name, and any email address stored for the user, place the cursor on the
full name.
• Effective in 9.6.1 HotFix 2, you can select the full name of the user in filter operations in the Analyst tool.
Previously, you selected the login name of the user in filter operations in the Analyst tool.

Informatica Transformations
This section describes changes to Informatica transformations in version 9.6.1 HotFix 2.

Address Validator Transformation


The following changes apply to the Address Validator Transformation:

• Effective in version 9.6.1 HotFix 2, the Address Validator transformation uses version 5.6.0 of the
Informatica Address Doctor software engine. The engine enables the new features that you can use in the
Address Validator transformation in version 9.6.1 HotFix 2.
Previously, the transformation used version 5.5.0 of the Informatica Address Doctor software engine.
• Effective in version 9.6.1 HotFix 2, the Address Validator transformation can return county information
and sub-building information when you validate United States address data in suggestion list mode. The
transformation returns the county information on a Province 2 port. The transformation returns the sub-
building information on a sub-building port.
The transformation continues to return county information and sub-building information when you
validate the address data in batch mode, certified mode, and interactive mode.
Previously, the transformation did not return the information for United States address data in suggestion
list mode.
• Effective in version 9.6.1 HotFix 2, the National Institute of Statistics and Economic Studies Code port
name changes to INSEE 9-Code. You do not need to update the configuration of an Address Validator
transformation that uses the National Institute of Statistics and Economic Studies Code port.
• Effective in version 9.6.1 HotFix 2, all Locality Complete ports, Locality Name ports, and Locality Preferred
Name ports have a precision of 100.
Previously, the ports had a precision of 50.

Data Processor Transformation


Effective in version 9.6.1 HotFix 2, a Data Processor transformation that converts hierarchical input to
relational output has significantly improved performance.

To further increase performance for XML input, you can clear the Normalize XML Input setting in the Settings
tab when XML input is already normalized.



Decision Transformation
Effective in version 9.6.1 HotFix 2, you can set a maximum precision of 1024 on the REPLACESTR() function
in the Decision transformation.

Previously, you set a maximum precision of 512 on the function.

Metadata Manager
This section describes changes to Metadata Manager in version 9.6.1 HotFix 2.

Business Glossary Resources


Effective in version 9.6.1 HotFix 2, business glossary resources have the following changes:

• When you load a business glossary resource, Metadata Manager extracts published business terms in
unpublished categories. Previously, Metadata Manager did not extract a published business term when
the category to which the term belongs was unpublished.
• Metadata Manager no longer displays audit trail information for business terms and categories. To view
audit trail information for business terms or categories, view the object history in the Analyst tool.

Metadata Manager Command Line Programs


Effective in version 9.6.1 HotFix 2, Metadata Manager repository commands have behavior changes or
changed command options. Additionally, some commands are moved from the mmcmd command line
program to the mmRepoCmd command line program.

The following mmRepoCmd command has changed behavior:

restoreRepository

Restores Metadata Manager repository contents from a back-up file. You can restore repository contents
to an empty repository. Previously, you had to create repository contents before you could run this
command. The options for this command are not changed.

The following commands are moved from mmcmd to mmRepoCmd:

createRepository

Creates the Metadata Manager warehouse tables and imports models for metadata sources into the
Metadata Manager repository. You must enable the Metadata Manager Service before you can run this
command.

You can run this command from an mmRepoCmd instance that is installed with the Informatica services,
Informatica client, or Informatica utilities. Previously, you could run this command from an mmRepoCmd
instance that was installed with the Informatica services.

The options for this command are changed. You enter command options for the Metadata Manager user
instead of for the domain user. Also, you no longer have to enter command options for the PowerCenter
repository. The Metadata Manager Service process restores the PowerCenter repository content when
you start the Metadata Manager service.

The following table describes new command options:

Option Description

-url Host name and port number of the Metadata Manager Service that runs the Metadata
Manager application.

--user Metadata Manager user name.

--encryptedPassword Encrypted password flag for the Metadata Manager user password.

--password Password for the Metadata Manager user.

--namespace Name of the security domain to which the Metadata Manager user belongs.
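
For example, a sketch of an invocation that uses the new options (the host, port, and credentials are
placeholders, and the exact -url format is described in the Command Reference):

mmRepoCmd createRepository -url MMHost:10250 --user Administrator --password MyPassword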

The following table describes command options that are removed:

Option Description

--securityDomain Name of the security domain to which the Informatica domain user belongs.

--domainUser User name used to connect to the Informatica domain.

--domainPassword Password for the Informatica domain user.

-pcRepositoryName Name of the PowerCenter repository that contains the metadata objects used to load
metadata into the Metadata Manager warehouse.

-pcRepositoryUser User account for the PowerCenter repository. Use the repository user account you
configured for the Repository Service.

-pcRepositoryNamespace Name of the security domain to which the PowerCenter repository user belongs.

-pcRepositoryPassword Password for the PowerCenter repository user.

-restorePCRepository Restore the repository back-up file for the PowerCenter repository to create the
objects used by Metadata Manager in the PowerCenter repository database.

The following table describes changed command options:

Option Description

--keyTab This option specifies the path and file name of the keytab file for the Metadata Manager user instead
of for the domain user.

deleteRepository

Deletes Metadata Manager repository content, including all metadata and repository database tables.

You can run this command from an mmRepoCmd instance that is installed with the Informatica services,
Informatica client, or Informatica utilities. Previously, you could run this command from an mmRepoCmd
instance that was installed with the Informatica services.

The options for this command are changed. You enter command options for the Metadata Manager user
instead of for the domain user.



The following table describes new command options:

Option Description

-url Host name and port number of the Metadata Manager Service that runs the Metadata
Manager application.

--user Metadata Manager user name.

--encryptedPassword Encrypted password flag for the Metadata Manager user password.

--password Password for the Metadata Manager user.

--namespace Name of the security domain to which the Metadata Manager user belongs.

The following table describes command options that are removed:

Option Description

--securityDomain Name of the security domain to which the Informatica domain user belongs.

--domainUser User name used to connect to the Informatica domain.

--domainPassword Password for the Informatica domain user.

The following table describes changed command options:

Option Description

--keyTab This option specifies the path and file name of the keytab file for the Metadata Manager user instead
of for the domain user.

restorePCRepository

Restores a PowerCenter repository back-up file that contains Metadata Manager objects to the
PowerCenter repository database. You must run this command from an mmRepoCmd instance that is
installed with the Informatica services. The options for this command are not changed.

Metadata Manager Privileges


Effective in version 9.6.1 HotFix 2, the privileges that you need to create or restore the Metadata Manager
repository are changed.

To create or restore the Metadata Manager repository, you must belong to the default Administrator group.
Previously, you needed the Manage Services privilege with permission on the Metadata Manager Service.

Metadata Manager Product Name


Effective in version 9.6.1 HotFix 2, the product name that appears in the Metadata Manager web application
is changed to Metadata Manager. Previously, the product name was Metadata Manager & Business Glossary.

PowerExchange Adapters
This section describes changes to PowerExchange Adapters in version 9.6.1 HotFix 2.

PowerExchange for Vertica


Effective in version 9.6.1 HotFix 2, the following changes apply to pushdown optimization with
PowerExchange for Vertica:

• When you push the DATE_DIFF function to Vertica, Vertica rounds the date difference value to the nearest
integer. However, the PowerCenter Integration Service returns a float value. If you want the date
difference to be treated as a float value in the Vertica database, you can disable pushdown optimization.
• When you specify the format as Y and push the DATE_DIFF function to Vertica, Vertica calculates the
difference in the dates in terms of number of days. However, the PowerCenter Integration Service
calculates the difference in terms of number of years. If you want the difference value to be treated in
terms of number of years, you can disable pushdown optimization.

Release Tasks (9.6.1 HotFix 2)


This section describes the release tasks in version 9.6.1 HotFix 2.

Metadata Manager
This section describes release tasks for Metadata Manager in version 9.6.1 HotFix 2.

HDFS Data Objects in Informatica Platform Resources


Effective in version 9.6.1 HotFix 2, Metadata Manager adds a class for HDFS data objects in Informatica
Platform resources. Metadata Manager displays a new icon for objects of this class. The new class and icon
differentiate HDFS data objects from flat file data objects.

To display the new class and icon, reload any Informatica Platform resource that includes HDFS data objects.



Chapter 23

New Features, Changes, and Release Tasks (9.6.1 HotFix 1)

This chapter includes the following topics:

• New Features (9.6.1 HotFix 1)
• Changes (9.6.1 HotFix 1)
• Release Tasks (9.6.1 HotFix 1)

New Features (9.6.1 HotFix 1)


This section describes new features in version 9.6.1 HotFix 1.

Big Data
This section describes new big data features in version 9.6.1 HotFix 1.

Data Warehousing
Big Data Edition has the following new features and enhancements for data warehousing:
Binary Data Type

Effective in version 9.6.1 HotFix 1, a mapping in the Hive environment can process binary data that
passes through the mapping ports. However, the mapping cannot process expression functions that
use binary data.

For more information, see the Informatica 9.6.1 HotFix 1 Big Data Edition User Guide.

Truncate Partitions in a Hive Target

Effective in version 9.6.1 HotFix 1, the Data Integration Service can truncate the partition in the Hive
target. You must choose to both truncate the partition in the Hive target and truncate the target table.

For more information, see the Informatica 9.6.1 HotFix 1 Big Data Edition User Guide.

Hadoop Distributions
Effective in version 9.6.1 HotFix 1, Big Data Edition added support for the following Hadoop distributions:

• Cloudera CDH 5.1


• Hortonworks HDP 2.1
Big Data Edition dropped support for Hortonworks HDP 2.0.

For more information, see the Informatica 9.6.1 HotFix 1 Big Data Edition Installation and Configuration Guide.

Hadoop Ecosystem
Big Data Edition has the following new features and enhancements for the Hadoop ecosystem:

Cloudera Manager

Effective in version 9.6.1 HotFix 1, you can use Cloudera Manager to distribute the Big Data Edition
installation as parcels across the Hadoop cluster nodes for Cloudera CDH 5.1.

For more information, see the Informatica 9.6.1 HotFix 1 Big Data Edition Installation and Configuration
Guide.

High Availability

Effective in version 9.6.1 HotFix 1, you can enable the Data Integration Service and the Developer tool to
read from and write to a highly available Hadoop cluster. A highly available Hadoop cluster can provide
uninterrupted access to the JobTracker, NameNode, and ResourceManager in the cluster. You must
configure the Developer tool to communicate with a highly available Hadoop cluster on a Hadoop
distribution.

For more information, see the Informatica 9.6.1 HotFix 1 Big Data Edition Installation and Configuration
Guide.

Kerberos Authentication

Effective in version 9.6.1 HotFix 1, you can configure the Informatica domain that uses Kerberos
authentication to run mappings in a Hadoop cluster that also uses Kerberos authentication. You must
configure a one-way cross-realm trust to enable the Hadoop cluster to communicate with the
Informatica domain.

Previously, you could run mappings in a Hadoop cluster that used Kerberos authentication if the
Informatica domain did not use Kerberos authentication.

For more information, see the Informatica 9.6.1 HotFix 1 Big Data Edition User Guide.

Schedulers

Effective in version 9.6.1 HotFix 1, the following schedulers are valid for Hadoop distributions:

• Capacity scheduler
• Fair scheduler

For more information, see the Informatica 9.6.1 HotFix 1 Big Data Edition Installation and Configuration
Guide.

Business Glossary
This section describes new Business Glossary features in version 9.6.1 HotFix 1.

Export Relationship View Diagram

Effective in version 9.6.1 HotFix 1, you can export the relationship view diagram after you open it. Export
the relationship view diagram to access the diagram when you are not logged in to the Analyst tool or to
share the diagram with users who cannot access Business Glossary.

For more information, see the Informatica 9.6.1 HotFix 1 Business Glossary Guide.

Multi-valued Attributes in Business Glossary Desktop

Effective in version 9.6.1 HotFix 1, you can view multi-valued attributes in Business Glossary Desktop.
Previously, you could only view single-valued attributes. Properties such as Contains and See Also are
examples of multi-valued attributes.



Command Line Programs
This section describes new and changed commands and options for the Informatica command line programs
in version 9.6.1 HotFix 1.

pmrep Command
Effective in version 9.6.1 HotFix 1, the following table describes an updated pmrep command:

Command Description

PurgeVersion Contains the following new option:


-k (log objects not purged). Optional. Lists the names and versions of all objects that are not purged
even though they match the purge criteria. The -k option also lists the reason that each object version
was not purged. For example, an object version is not purged if you do not have sufficient privileges to
purge the object.
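
For example, a sketch of a purge that logs unpurged objects (the -n and -f options are assumptions based on
the existing PurgeVersion syntax, and MyFolder is a placeholder; connect to the repository with pmrep connect
first):

pmrep purgeversion -n 5 -f MyFolder -k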

isp Commands
Effective in version 9.6.1 HotFix 1, the following table describes new isp commands:

Command Description

convertUserActivityLog Converts binary user activity logs to text or XML format.

getUserActivityLog Retrieves user activity logs in binary, text, or XML format.

migrateUsers Migrates the groups, roles, privileges, and permissions of users in a native security domain
to users in one or more LDAP security domains. Requires a user migration file.
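
For example, a sketch that retrieves the user activity log in XML format (the -fm and -lo option names are
assumptions modeled on infacmd isp getLog; the domain and credentials are placeholders):

infacmd isp getUserActivityLog -dn MyDomain -un Administrator -pd MyPassword -fm xml -lo useractivity.xml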

Connectivity
This section describes new connectivity features in version 9.6.1 HotFix 1.

Netezza Connectivity
Effective in version 9.6.1 HotFix 1, you can use ODBC to read data from and write data to a Netezza
database.

For more information, see the Informatica 9.6.1 HotFix 1 Developer Tool Guide.

Data Quality Accelerators


This section describes new Data Quality accelerator features in version 9.6.1 HotFix 1.

Data Cleansing Rules


Effective in version 9.6.1 HotFix 1, you can select the following rule when you add the Core accelerator to a
Model repository project:

rule_GTIN_Validation

Validates a Global Trade Item Number (GTIN). The rule validates eight-digit, twelve-digit, thirteen-digit,
and fourteen-digit numbers. The rule returns "Valid" if the check digit is correct for the number and
"Invalid" if the check digit is incorrect.

Find the rule in the General_Data_Cleansing folder of the accelerator project in the Model repository.

For more information, see the Informatica 9.6.1 HotFix 1 Accelerator Guide.

Matching Rules
Effective in version 9.6.1 HotFix 1, all Data Quality accelerator rules that perform match analysis contain a
pass-through input port and a pass-through output port. Use the ports to pass unique identifiers through a
rule.

Find the rules in the Matching_Deduplication folder of the accelerator project in the Model repository.

For more information, see the Informatica 9.6.1 HotFix 1 Accelerator Guide.

Documentation
This section describes new or updated guides included with the Informatica documentation in version 9.6.1
HotFix 1.

The Informatica documentation contains the following changed guide:

Informatica Business Glossary Version 2.0 API Reference Guide

Effective in version 9.6.1 HotFix 1, a new version of the guide contains URLs and parameters of the
Business Glossary REST APIs used to develop a client application.

Informatica Developer
This section describes new Informatica Developer features in version 9.6.1 HotFix 1.

Customized Data Object Write Properties


Effective in version 9.6.1 HotFix 1, the Truncate Hive Target Partition property is added to the customized
data object write properties. This property overwrites the partition in the Hive target in which the data is being
inserted. To enable this option, you must also select the option to truncate target tables.

For more information, see the Informatica 9.6.1 HotFix 1 Developer Tool Guide.

Netezza Pushdown Optimization


Effective in version 9.6.1 HotFix 1, the Data Integration Service can push transformation logic to Netezza
sources that use native drivers.

For more information, see the Informatica 9.6.1 HotFix 1 Mapping Guide.

Secure Communication for SAP HANA


Effective in version 9.6.1 HotFix 1, you can configure secure communication to an SAP HANA database with
the SSL protocol.


Informatica Transformations
This section describes new Informatica transformation features in version 9.6.1 HotFix 1.

Address Validator Transformation


Effective in version 9.6.1 HotFix 1, you can select the following ports on the Address Validator
transformation:

Input Data

Output port that contains the data elements in an input address record in a structured XML format.

Result

Output port that contains data elements that represent the data in an output address in a structured XML
format.

Find the Input Data port and the Result port in the XML port group on the transformation.

For more information, see the Informatica 9.6.1 HotFix 1 Address Validator Port Reference.

Mappings
This section describes new mapping features in version 9.6.1 HotFix 1.

Informatica Mappings
Branch Pruning Optimization Method
Effective in version 9.6.1 HotFix 1, the Data Integration Service can apply the branch pruning optimization
method. When the Data Integration Service applies the branch pruning method, it removes transformations
that do not contribute any rows to the target in a mapping.

The Developer tool enables the branch pruning optimization method by default when you choose the normal
or full optimizer level. If branch pruning does not increase performance, you can disable it by setting the
optimizer level to minimal or none.

For more information, see the Informatica Data Services 9.6.1 HotFix 1 Performance Tuning Guide.

Constraints
Effective in version 9.6.1 HotFix 1, the Data Integration Service can read constraints from relational sources,
logical data objects, physical data objects, or virtual tables. A constraint is a conditional expression that the
values on a data row must satisfy. When the Data Integration Service reads constraints, it might drop rows
for which the constraint does not evaluate to TRUE, depending on the optimization method applied.

For more information, see the Informatica 9.6.1 HotFix 1 Mapping Guide.

Metadata Manager
This section describes new Metadata Manager features in version 9.6.1 HotFix 1.

Browser Support
Effective in version 9.6.1 HotFix 1, the Metadata Manager application can run in the following web browsers:

• Internet Explorer 11.0


• Google Chrome 35
For more information about product requirements and supported platforms, see the Product Availability
Matrix on Informatica Network:
[Link]

Microsoft SQL Server and Oracle Exadata Versions


Effective in version 9.6.1 HotFix 1, Metadata Manager supports the following database versions:

• Microsoft SQL Server 2014


• Oracle Exadata 11g
Therefore, you can perform the following actions:

• Create Microsoft SQL Server or Oracle resources that extract metadata from these database versions.
• Create Business Glossary, Informatica Platform, or PowerCenter resources when the Model repository or
PowerCenter repository is in either of these database versions.
• Create the Metadata Manager repository in either of these database versions.
For more information about creating resources, see the Informatica 9.6.1 HotFix 1 Metadata Manager
Administrator Guide. For more information about creating the Metadata Manager repository, see the
Informatica 9.6.1 HotFix 1 Installation and Configuration Guide.

Security Enhancements
Effective in version 9.6.1 HotFix 1, when you create or edit a PowerCenter resource, you can prevent
Metadata Manager from displaying secure JDBC parameters that are part of the JDBC URL for the
PowerCenter repository database.

For more information, see the Informatica 9.6.1 HotFix 1 Metadata Manager Administrator Guide.

PowerCenter
This section describes new PowerCenter features in version 9.6.1 HotFix 1.

Secure Communication for SAP HANA


Effective in version 9.6.1 HotFix 1, you can configure secure communication to an SAP HANA database with
the SSL protocol.

PowerExchange Adapters
This section describes new PowerExchange adapter features in version 9.6.1 HotFix 1.

PowerExchange Adapters for Informatica


This section describes new Informatica adapter features in version 9.6.1 HotFix 1.



PowerExchange for Cassandra
Effective in version 9.6.1 HotFix 1, you can use PowerExchange for Cassandra to read data from or write data
to a Cassandra database. You can add a Cassandra data object as a source or a target in a mapping and run
the mapping to read or write data. You can create virtual tables to use Cassandra collections in a mapping.

For more information, see the Informatica PowerExchange for Cassandra 9.6.1 HotFix 1 User Guide.

PowerExchange for Greenplum


Effective in version 9.6.1 HotFix 1, you can configure secure communication to a Greenplum database with
the SSL protocol.

For more information, see the Informatica PowerExchange for Greenplum 9.6.1 HotFix 1 User Guide.

PowerExchange for HBase


Effective in version 9.6.1 HotFix 1, you can use PowerExchange for HBase to connect to an HBase data store
that uses Kerberos authentication. You must enable Kerberos authentication and configure HBase
connection properties to access an HBase data store that uses Kerberos authentication.

For more information, see the Informatica PowerExchange for HBase 9.6.1 HotFix 1 User Guide.

PowerExchange for HDFS


Effective in version 9.6.1 HotFix 1, when you read complex files, you can use the
[Link] input format to read text files in
batches and increase performance.

For more information, see the Informatica PowerExchange for HDFS 9.6.1 HotFix 1 User Guide.

PowerExchange for Hive


Effective in version 9.6.1 HotFix 1, PowerExchange for Hive supports the Binary data type in a Hive
environment. The Binary data type has a range of 1 to 104,857,600 bytes.

For more information, see the Informatica PowerExchange for Hive 9.6.1 HotFix 1 User Guide.

PowerExchange for Salesforce


Effective in version 9.6.1 HotFix 1, you can use the PowerExchange for Salesforce connection listed under
the Cloud connection category to read data from and write data to Salesforce. You can add a Salesforce data
object operation as a source or a target in a mapping and run the mapping to read or write data.

For more information, see the Informatica PowerExchange for Salesforce 9.6.1 HotFix 1 User Guide.

PowerExchange for SAS


Effective in version 9.6.1 HotFix 1, you can use PowerExchange for SAS to read data from SAS and write data
to SAS.

For more information, see the Informatica PowerExchange for SAS 9.6.1 HotFix 1 User Guide.

PowerExchange for Tableau


Effective in version 9.6.1 HotFix 1, you can use PowerExchange for Tableau to generate the Tableau data
extract file by reading data from multiple sources, such as flat files and SAP applications. Business users can
open the extract file in Tableau Desktop to visualize the data and identify patterns and trends.

For more information, see the Informatica PowerExchange for Tableau 9.6.1 HotFix 1 User Guide.

PowerExchange Adapters for PowerCenter
This section describes new PowerCenter adapter features in version 9.6.1 HotFix 1.

PowerExchange for Cassandra


Effective in version 9.6.1 HotFix 1, you can use PowerExchange for Cassandra to extract data from and load
data to a Cassandra database. You can create virtual tables to use Cassandra collections in a mapping.

For more information, see the Informatica PowerExchange for Cassandra 9.6.1 HotFix 1 User Guide for
PowerCenter.

PowerExchange for Greenplum


Effective in version 9.6.1 HotFix 1, you can configure secure communication to a Greenplum database with
the SSL protocol.

For more information, see the Informatica PowerExchange for Greenplum 9.6.1 HotFix 1 User Guide for
PowerCenter.

PowerExchange for Vertica


Effective in version 9.6.1 HotFix 1, you can use PowerExchange for Vertica to write large volumes of data to a
Vertica database.

For more information, see the Informatica PowerExchange for Vertica 9.6.1 HotFix 1 User Guide for
PowerCenter.

Reference Data
This section describes new reference data features in version 9.6.1 HotFix 1.

Probabilistic Models
Effective in version 9.6.1 HotFix 1, you can view the total number of reference data values that you assigned
to a label in a probabilistic model.

You can use wildcard characters to search for data values in a probabilistic model.

For more information, see the Informatica 9.6.1 HotFix 1 Reference Data Guide.

Rule Specifications
This section describes new rule specification features in version 9.6.1 HotFix 1.

Date and Time Operations


Effective in version 9.6.1 HotFix 1, you can configure a rule statement to perform the following operations on
date and time data:

• Return the date and time at which the Data Integration Service runs the mapping that contains the rule
statement.
• Determine if a time stamp references a point in time before or after the Data Integration Service runs the
mapping that contains the rule statement.
• Convert a string of date and time data to a date/time data type.
For more information, see the Informatica 9.6.1 HotFix 1 Rule Specification Guide.



Reference Table Operations
Effective in version 9.6.1 HotFix 1, you can configure a rule statement to return a value that you specify when
an input value matches a reference table value.

For more information, see the Informatica 9.6.1 HotFix 1 Rule Specification Guide.

Changes (9.6.1 HotFix 1)


This section describes changes in version 9.6.1 HotFix 1.

Application Services
This section describes changes to application services in version 9.6.1 HotFix 1.

Content Management Service


Effective in version 9.6.1 HotFix 1, the Content Management Service sets default values for the following
Address Validation process properties:

• No Pre-Load Countries
• No Pre-Load Geocoding Countries
• No Pre-Load Suggestion List Countries
• No Pre-Load Address Code Countries
The Content Management Service sets the default value for each property to ALL.

Previously, the Content Management Service did not set default values for the properties.

Note: The default properties do not affect the data output from any address validation mapping that you
created in an earlier product version.

Business Glossary
This section describes changes to Business Glossary in version 9.6.1 HotFix 1.

Business Glossary API Changes


Effective in version 9.6.1 HotFix 1, the URLs and parameters of the Business Glossary REST APIs that you
use to develop a client application have changed.

Informatica Transformations
This section describes changes to Informatica transformations in version 9.6.1 HotFix 1.

Address Validator Transformation
The following changes apply to the Address Validator transformation in version 9.6.1 HotFix 1:

• Effective in version 9.6.1 HotFix 1, the Address Validator transformation populates additional fields in a
Software Evaluation and Recognition Program (SERP) report. The SERP report includes the following
fields:
- Processing Date

- Date of CPC Address Data File

Previously, the transformation did not populate the fields.


• Effective in version 9.6.1 HotFix 1, the Extended Element Status port name is Extended Element Result
Status.

Data Processor Transformation


Effective in version 9.6.1 HotFix 1, you can export a Data Processor transformation with an XMap object and
import it again into the Developer tool as a transformation with an XMap object.

Previously, when you exported a Data Processor transformation with an XMap object, it was re-imported into
the Developer tool as a transformation with a Script object.

Metadata Manager
This section describes changes to Metadata Manager in version 9.6.1 HotFix 1.

Microsoft Analysis and Reporting Services Metadata Source Version


Effective in version 9.6.1 HotFix 1, you can create Microsoft Analysis and Reporting Services resources to
extract metadata from Microsoft Analysis and Reporting Services version 10.5 (2008 R2).

Previously, you could extract metadata from Microsoft Analysis and Reporting Services version 9.0 (2005).

Search
Effective in version 9.6.1 HotFix 1, the behavior for customizing the list of words to ignore in searches is
changed.

The behavior is changed in the following ways:

• You no longer need to create the [Link] file manually. Instead, the Informatica services installer
creates a default [Link] file in the following directory:
<Informatica installation directory>\services\shared\jars\pc\classes
• You must set the UseCustomStopWords property in the [Link] file to true, as shown in the example below.
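
The following entry shows the setting, assuming the file uses standard name=value properties syntax; this
guide names only the property, so the exact file format is an assumption:

    UseCustomStopWords=true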
The [Link] file created by the installer contains the default list of English words to ignore in searches.
To customize the word list, update the [Link] file, enable the UseCustomStopWords property, disable
and enable the Metadata Manager Service, and then manually update the search index for all resources.

Previously, to customize the word list, you had to create the [Link] file manually, disable and enable
the Metadata Manager Service, and then manually update the search index for all resources.

PowerCenter Transformations
This section describes changes to PowerCenter transformations in version 9.6.1 HotFix 1.



Data Masking Transformation
Effective in version 9.6.1 HotFix 1, you set the substitution dictionary owner name and the storage owner
name in the transaction environment properties.

Previously, you set the substitution dictionary owner name and the storage owner name in the
Transformations view on the Mapping tab in the session properties.

PowerExchange
This section describes changes to PowerExchange functionality in the Informatica domain in version 9.6.1
HotFix 1.

infacmd pwx displayStatsListener Command


Effective in version 9.6.1 HotFix 1, the infacmd pwx displayStatsListener command can produce monitoring
statistics for PowerExchange Listener processes on Linux, zLinux, and UNIX. Previously, the command
produced statistics only for PowerExchange Listener processes on Windows.

PowerExchange Adapters
This section describes changes to PowerExchange adapters in version 9.6.1 HotFix 1.

PowerExchange Adapters for Informatica


This section describes changes to Informatica adapters in version 9.6.1 HotFix 1.

PowerExchange for Salesforce


Effective in version 9.6.1 HotFix 1, the PowerExchange for Salesforce connection listed under the Enterprise
connection category is deprecated and Informatica will drop support in the next major release. Informatica
recommends that you use the new PowerExchange for Salesforce connection listed under the Cloud
connection category to read data from and write data to Salesforce.

PowerExchange for MongoDB


Effective in version 9.6.1 HotFix 1, the name of the Informatica PowerExchange for MongoDB ODBC driver
file is [Link].

Previously, the name of the Informatica PowerExchange for MongoDB ODBC driver file was
[Link].

PowerExchange Adapters for PowerCenter


This section describes changes to PowerCenter adapters in version 9.6.1 HotFix 1.

PowerExchange for MongoDB


Effective in version 9.6.1 HotFix 1, the name of the Informatica PowerExchange for MongoDB ODBC driver
file is [Link].

Previously, the name of the Informatica PowerExchange for MongoDB ODBC driver file was
[Link].

Reference Data
This section describes changes to reference data functionality in version 9.6.1 HotFix 1.

Probabilistic Models
Effective in version 9.6.1 HotFix 1, the Developer tool uses version 3.4 of the Stanford Named Entity
Recognition API to compile a probabilistic model.

Previously, the Developer tool used version 1.2.6 of the API to compile a probabilistic model.

Release Tasks (9.6.1 HotFix 1)


This section describes the release tasks in version 9.6.1 HotFix 1.

PowerExchange Adapters
This section describes release tasks for PowerExchange adapters in version 9.6.1 HotFix 1.

PowerExchange Adapters for Informatica


This section describes release tasks for Informatica adapters in version 9.6.1 HotFix 1.

PowerExchange for Salesforce


Effective in version 9.6.1 HotFix 1, the PowerExchange for Salesforce connection listed under the Enterprise
connection category is deprecated, and Informatica will drop support in the next major release. Informatica
recommends that you use the new PowerExchange for Salesforce connection listed under the Cloud
connection category to read data from and write data to Salesforce.

You can use existing mappings with the deprecated PowerExchange for Salesforce adapter. However, you
cannot update the existing mappings or connections to use the PowerExchange for Salesforce connection
listed under the Cloud connection category. You must create new mappings and connections to use the new
PowerExchange for Salesforce adapter.

For more information, see the Informatica PowerExchange for Salesforce 9.6.1 HotFix 1 User Guide.

PowerExchange for MongoDB


Before you upgrade from Informatica 9.6.1 to Informatica 9.6.1 HotFix 1, you must back up the [Link] file.

After you upgrade to Informatica 9.6.1 HotFix 1, replace the [Link] file with the backup copy of the
[Link] file, and change the MongoDB driver name in the [Link] file to
[Link].

For more information, see the Informatica PowerExchange for MongoDB 9.6.1 HotFix 1 User Guide.

PowerExchange Adapters for PowerCenter


This section describes release tasks for PowerCenter adapters in version 9.6.1 HotFix 1.

PowerExchange for MongoDB


Before you upgrade from Informatica 9.6.1 to Informatica 9.6.1 HotFix 1, you must back up the [Link] file.

After you upgrade to Informatica 9.6.1 HotFix 1, replace the [Link] file with the backup copy of the
[Link] file, and change the MongoDB driver name in the [Link] file to
[Link].

For more information, see the Informatica PowerExchange for MongoDB 9.6.1 HotFix 1 User Guide for
PowerCenter.



Informatica Web Client Applications
After you upgrade, you must clear your web browser cache before you access the Informatica web client
applications.

Informatica supports Google Chrome and Microsoft Internet Explorer browsers. After you upgrade, clear the
browser caches on the machines from which you access the Informatica web client applications. The
Informatica web client applications include the Administrator tool, Analyst tool, Reporting Service, Reporting
and Dashboards Service, and Metadata Manager.

Chapter 24

New Features (9.6.1)


This chapter includes the following topics:

• Application Services
• Big Data
• Business Glossary
• Command Line Programs
• Documentation
• Informatica Administrator
• Informatica Developer
• Informatica Development Platform
• Informatica Transformations
• Installer
• Mappings
• Metadata Manager
• PowerExchange
• PowerExchange Adapters
• Profiles and Scorecards
• Reference Data
• Rule Specifications
• Sources and Targets
• Transformation Language Functions

Application Services
This section describes new application services features in version 9.6.1.

Content Management Service


This section describes new Content Management Service features in version 9.6.1.

The Content Management Service determines the preload behavior for address code lookup reference data
and interactive reference data. Use the Address Validation process properties to set the preload behavior.

The following properties set the preload behavior for address code lookup data:

Full Pre-Load Address Code Countries

Lists the countries for which the Data Integration Service loads all reference data into memory before
address validation begins.

Partial Pre-Load Address Code Countries

Lists the countries for which the Data Integration Service loads address reference metadata and indexing
structures into memory before address validation begins.

No Pre-Load Address Code Countries

Lists the countries for which the Data Integration Service loads no address reference data into memory
before address validation begins.

The following properties set the preload behavior for interactive reference data in addition to batch and
certified reference data:

Full Pre-Load Countries

Lists the countries for which the Data Integration Service loads all batch, certified, and interactive
reference data into memory before address validation begins.

Partial Pre-Load Countries

Lists the countries for which the Data Integration Service loads batch, certified, and interactive metadata
and indexing structures into memory before address validation begins.

No Pre-Load Countries

Lists the countries for which the Data Integration Service does not load batch, certified, or interactive
reference data into memory before address validation begins.

For more information, see the Informatica 9.6.1 Application Service Guide.

Big Data
This section describes new Big Data features in version 9.6.1.

Data Types in a Hive Environment


You can push high precision Decimal data types to a Hive environment that uses Hive 0.11 and above.

If the mapping is not enabled for high precision, the Data Integration Service converts all decimal values to
double values.

If the mapping is enabled for high precision, the Data Integration Service converts decimal values with a
precision greater than 28 to double values.

For more information, see the Informatica 9.6.1 Big Data Edition User Guide.

Hive Connection Properties


In the Hive connection, you can specify the following properties:

• Enter advanced Hive or Hadoop properties to configure or override Hive or Hadoop cluster properties in
[Link] on the machine on which the Data Integration Service runs, as shown in the example below.
• Enter the user name of the user that the Data Integration Service impersonates to run mappings on the
Hadoop cluster.
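
For example, an advanced property entry is a name-value pair such as the following. The property shown is
a standard Hive setting, but the exact entry format in the connection is an assumption:

    hive.exec.dynamic.partition.mode=nonstrict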



For more information, see the Informatica 9.6.1 Big Data Edition User Guide.

User Authentication
You can enable the Data Integration Service to run mapping and workflow jobs on a Hadoop cluster that uses
Kerberos authentication. The Hadoop cluster authenticates the SPN of the Data Integration Service user
account to run mapping and workflow jobs on the Hadoop cluster. To enable another user to run jobs on the
Hadoop cluster, you can configure the SPN of the Data Integration Service user account to impersonate
another user account.

For more information, see the Informatica 9.6.1 Big Data Edition User Guide.

Mappings on Hadoop Distributions


You can enable mappings to run on the following Hadoop distributions:

• Cloudera CDH 5.0


• Hortonworks HDP 2.0
• MapR 3.1
• Pivotal HD 1.1

For more information, see the Informatica 9.6.1 Big Data Edition Installation and Configuration Guide.

Business Glossary
This section describes new Business Glossary features in version 9.6.1.

Business Initiatives

A business initiative is a container of Glossary assets that you want to collectively approve and publish
in business glossary. Use a business initiative to publish multiple business terms, categories, and
policies at the same time. The business initiative goes through the same approval process as any other
Glossary asset.

Customize Category and Business Initiative Templates

You can customize templates for categories and business initiatives.

Default Values for Custom Properties

You can add default values for custom properties that you create when you customize a Glossary asset
template.

Asset Relationship Visualization

You can see a visual representation of the relationships that business terms and policies have with other
assets in business glossary. The asset relationship visualization diagram is dynamic and interactive. You
can rearrange the context of the diagram, filter the assets that display in the diagram, and change the
number of levels.

Synonym Retirement

You can set a retirement date for synonyms in business glossary. The state of the synonym changes
after the retirement date. Business glossary consumers view the state to identify the validity of the
synonym.

For more information, see the Informatica 9.6.1 Business Glossary Guide.



Command Line Programs
This section describes new commands in version 9.6.1.

Environment Variables
The following new environment variables are available for use with command line programs:

INFA_DEFAULT_DB_TRUSTSTORE_PASSWORD

Stores the database truststore password for infasetup commands.

INFA_NODE_KEYSTORE_PASSWORD

Stores the password for the infa_keystore.jks file for infasetup commands.

INFA_NODE_TRUSTSTORE_PASSWORD

Stores the password for the infa_truststore.jks file for infasetup commands.
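
For example, on UNIX you might export one of the passwords before you run an infasetup command. The
shell syntax is standard; the placeholder value is hypothetical:

    export INFA_DEFAULT_DB_TRUSTSTORE_PASSWORD=<truststore_password>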

infacmd dis Commands


The following infacmd dis commands are new:

ListSequenceObjectProperties

Lists the properties for a sequence data object.

ListSequenceObjects

Lists the sequence data objects deployed to an application.

SetSequenceState

Updates the current value of a sequence data object.

infacmd isp Commands


The following infacmd isp command is new:

printSPNAndKeytabNames

Generates the list of SPN and keytab file names for the nodes and services in the domain.

The following infacmd isp command is updated:

switchToGatewayNode

The command contains an option for the database truststore file (-dbtl). Enter the path and file name of
the truststore file for the secure domain configuration repository database. The option is required if you
use a secure database for the domain configuration repository.



infacmd mrs Commands
The following infacmd mrs command is new:

rebuildDependencyGraph

Rebuilds the object dependency graph so that you can view object dependencies after an upgrade.

infacmd rds Commands


Effective in version 9.6.1, the infacmd rds commands are obsolete. You can no longer use the infacmd rds
commands to manage the Reporting and Dashboards Service. Instead, use the Administrator tool.

The following infacmd rds commands are obsolete:

CreateService

Creates a Reporting and Dashboards Service in a domain.

ListServiceProcessOptions

Lists the Reporting and Dashboards Service process options.

infasetup Command
The following infasetup command is new:

updateKerberosConfig

Changes the realm name that the Informatica domain users belong to or changes the service realm name
that the Informatica domain services belong to. This command does not change the Kerberos configuration.

The following infasetup commands are updated:

BackupDomain, DefineDomain, DefineGatewayNode, DeleteDomain, RestoreDomain, updateGatewayNode,
upgradeDomainMetadata

Each command contains an option for the database truststore (-dbtl). Enter the path and file name of the
truststore file for the secure domain repository database. The option is required if you configured a secure
domain repository database for the domain.
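
For illustration, a hedged sketch of the new option on a domain backup. Only the -dbtl option is documented
here; the remaining option names and values are assumptions based on typical infasetup usage:

    infasetup.sh BackupDomain -da dbhost:1521 -du domain_db_user -dp domain_db_password
        -dt Oracle -ds orcl -bf domain_backup.mrep -dbtl /infa/ssl/db_truststore.jks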



mmcmd
Effective in version 9.6.1, the following mmcmd commands have changes:

createRepository

The --domainPassword option is required only when the domain uses Kerberos authentication and you do
not specify the --keyTab option for the domain user. Previously, this option was always required.

createResource

The following options are added:
- --resourcePassword. If the resource uses a password and the resource configuration file does not
contain the resource password, use this option to specify the password.
- --secureJDBCParameters. Use this option to specify secure JDBC parameters to append to the JDBC
connection URL. Metadata Manager does not display secure parameters or parameter values in the
resource configuration properties.

deleteRepository

The --domainPassword option is required only when the domain uses Kerberos authentication and you do
not specify the --keyTab option for the domain user. Previously, this option was always required.

getResource

The --includePassword option is added. You can include or exclude the resource password in the resource
configuration file. Previously, the command always included the password.

restorePCRepository

The --domainPassword option is required only when the domain uses Kerberos authentication and you do
not specify the --keyTab option for the domain user. Previously, this option was always required.

updateResource

The following options are added:
- --resourcePassword. If the resource uses a password and the resource configuration file does not
contain the resource password, use this option to specify the password.
- --secureJDBCParameters. Use this option to specify secure JDBC parameters to append to the JDBC
connection URL. Metadata Manager does not display secure parameters or parameter values in the
resource configuration properties.

mmRepoCmd
Effective in version 9.6.1, you use the mmRepoCmd command line program to back up and restore Metadata
Manager repository database contents.

mmRepoCmd contains the following enhancements:

• When you restore repository contents, mmRepoCmd encrypts sensitive data in the Metadata Manager
repository with the domain encryption key.
• mmRepoCmd gets repository database connection information from the Metadata Manager Service.
When you run commands, you do not need to specify connection parameters as arguments.
mmRepoCmd contains the following commands:

backupRepository

Backs up the Metadata Manager repository to a backup file.

restoreRepository

Restores Metadata Manager repository contents from a backup file.

Previously, you used the backupCmdLine command line program to back up and restore Metadata Manager
repository database contents. backupCmdLine is removed.



pmrep Command
The following pmrep command is updated:

createConnection

The command contains the kerberized_connection (-K) option, which indicates that the database that you
are connecting to runs on a network that uses Kerberos authentication.

rcfmu
Effective in version 9.6.1, you can use rcfmu to migrate resource configuration files from Metadata Manager
9.1.0, 9.5.x, and 9.6.0 to the current version. rcfmu contains a new option, -smv, that specifies the original
resource configuration file version.

Previously, you used rcfmu to migrate resource configuration files from Metadata Manager 9.1.0 to 9.5.x or
9.6.0.

rmu
Effective in version 9.6.1, you can use rmu to migrate resources from Metadata Manager 9.1.0, 9.5.x, and
9.6.0 to the current version. rmu detects the original resource version.

Previously, you used rmu to migrate resources from Metadata Manager 9.1.0 to 9.5.x or 9.6.0.

Documentation
This section describes new guides included with the Informatica documentation in version 9.6.1. Some new
guides are organized based on shared functionality among multiple products and replace previous guides.

The Informatica documentation contains the following new guides:

Informatica Big Data Edition Installation and Configuration Guide

Contains information about installing Informatica Big Data Edition and configuring mappings to work
with multiple Hadoop distributions. Previously, installation was documented in the PowerCenter Big Data
Edition User Guide.

Informatica Installation and Configuration Guide

Contains information about planning the domain, preparing databases, installing Informatica services
and clients, and creating application services for all Informatica platform products. Previously,
installation was documented in guides specific to the Data Quality, Data Services, and PowerCenter
products.

Informatica Upgrading from Version 9.6.0

Contains information about upgrading all Informatica platform products from version 9.6.0 to version
9.6.1. Previously, upgrade was documented in guides specific to the Data Quality, Data Services, and
PowerCenter products.

Informatica Upgrading from Version 9.5.1

Contains information about upgrading all Informatica platform products from version 9.5.1 to version
9.6.1. Previously, upgrade was documented in guides specific to the Data Quality, Data Services, and
PowerCenter products.

Informatica Upgrading from Version 9.5.0

Contains information about upgrading all Informatica platform products from version 9.5.0 to version
9.6.1. Previously, upgrade was documented in guides specific to the Data Quality, Data Services, and
PowerCenter products.

Informatica Upgrading from Version 9.1.0

Contains information about upgrading all Informatica platform products from version 9.1.0 to version
9.6.1. Previously, upgrade was documented in guides specific to the Data Quality, Data Services, and
PowerCenter products.

Informatica PowerExchange Adapters for Informatica Release Notes

Contains important information about installation, closed enhancements, fixed limitations, and known
limitations for PowerExchange adapters for Informatica. Previously, this information was documented in
the Informatica Release Notes.

Informatica PowerExchange Adapters for PowerCenter Release Notes

Contains important information about installation, closed enhancements, fixed limitations, and known
limitations for PowerExchange adapters for PowerCenter. Previously, this information was documented
in the Informatica Release Notes.

Informatica Administrator
This section describes new Informatica Administrator features in version 9.6.1.

Informatica Cloud Administration


You can use the Administrator tool to view Informatica Cloud organizations. You can monitor the status of
Secure Agents and view cloud connections used in an organization.

For more information, see the Informatica 9.6.1 Administrator Guide.

Informatica Developer
This section describes new Informatica Developer features in version 9.6.1.

Object Dependencies
In the Developer tool, you can view the object dependencies for an object in the Object Dependencies view to
perform an impact analysis on affected objects before you modify or delete the object.

For more information, see the Informatica 9.6.1 Developer Tool Guide.

Informatica Development Platform


This section describes new Informatica Development Platform features in version 9.6.1.

Informatica Connector Toolkit



After you define the run-time components of the adapter, you can use the Test Read and Test Write wizards
to test the read and write capability of the adapter. The test wizards display the test statistics, error
messages, and log files. You can debug and fix issues before you deploy the adapter to the Informatica
domain.

For more information, see the Informatica Development Platform 9.6.1 Informatica Connector Toolkit
Developer Guide.

Informatica Transformations
This section describes new transformation features in version 9.6.1.

Address Validator Transformation


This section describes new features to the Address Validator transformation that you create in the Developer
tool.

Modes
You can configure the Address Validator transformation to run in the following modes:

Address Code Lookup Mode

When you select address code lookup mode, the Data Integration Service reads an identification code
and returns the corresponding address elements from the reference data. The identification code can
refer to a locality, street, or mailbox. For example, you can enter the choumei aza code for a Japanese
address and retrieve the complete address as output.

Interactive Mode

When you select interactive mode, address validation reads a partial address and returns all addresses
from the reference data that match the input elements. Select interactive mode to add data to an
incomplete address. You can enter the partial address on a single input port.

You also can enter a partial address on a single input port when you configure the transformation to run
in suggestion list mode.

Ports
You can select the following ports for the Address Validator transformation:

Count

Output port that indicates the number of addresses in the address reference data sets that match the
data in the input address.

Count Overflow

Output port that indicates whether the reference data contains addresses that address validation does
not return to the transformation.

Gmina Code PL

Output port that returns the identification code for the municipality or commune to which a Polish address
belongs.



Institute of Geography and Statistics Code

Output port that contains a seven-digit identification code for the city or state to which a Brazilian
address belongs.

Locality Identifier DE

Input and output ports that contain the identification code for a German locality.

National Address Database Identifier ZA

Input and output port that contains a seven-digit identification code for the street in a South African
address.

National Institute of Statistics and Economic Studies Code

Input and output port that identifies the administrative regions to which a French address belongs. The
National Institute of Statistics and Economic Studies code is also called the INSEE code.

New Choumei Aza Code JP

Output port that returns a unique delivery point code for a Japanese mailbox.

Official Municipality Key DE

Input and output ports that contain an identification code for a German municipality.

Postal Address Code AT

Output port that contains building-level post code data for an Austrian address.

Postal Address Code RS

Output port that returns a street-level post code for a Serbian address.

Postal Code Extension

Output port that contains a two-digit suffix for the post code of a Swiss address.

Street Identifier DE

Input and output ports that contain a street-level identification code for a German address.

Supplementary status ports

Output ports that indicate if address validation can return supplementary data for an address.

The transformation includes supplementary status ports for Austria, Brazil, France, Germany, Poland,
South Africa, and Switzerland.

TERYT Locality Identifier PL

Output port that contains the identification code for the locality to which a Polish address belongs.

TERYT Street Identifier PL

Output port that contains the identification code for the street in a Polish address.

Unique Delivery Point Reference Number GB

Output port that returns a unique delivery point code for a United Kingdom mailbox.

For more information, see the Informatica 9.6.1 Address Validator Port Reference and the Informatica 9.6.1
Developer Transformation Guide.

Properties
You can configure the following advanced properties for the Address Validator transformation:

Alias Locality

The property determines whether address validation replaces a valid location alias with the official
location name.



Matching Extended Archive

The property determines whether address validation returns a unique delivery point code for an out-of-
date Japanese address.

Data Processor Transformation


This section describes new features to the Data Processor transformation that you create in the Developer
tool.

File Input for Streamer


A Data Processor transformation Streamer can use a file as input. Previously, the Streamer used only a
buffer as input.

For more information, see the Informatica Data Transformation 9.6.1 User Guide.

Generate Data Transformation with AVRO or XML


You can auto-generate a Data Processor transformation with Avro input and any format output, or Avro
output and any format input, with the New Transformation wizard. Use an Avro schema file or sample file to
define the Avro file specification. You can also generate a transformation with both Avro input format and
output format. In this case, use separate Avro schema files or sample files to define both the input and the
output.
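
For example, a minimal Avro schema file that you might supply to the wizard. The record and field names
are hypothetical:

    {
      "type": "record",
      "name": "Customer",
      "fields": [
        {"name": "id", "type": "int"},
        {"name": "name", "type": "string"}
      ]
    }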

When you add a Data Processor transformation that reads Avro input to a mapping, you also add a complex
file reader to pass the Avro input to the transformation. For a mapping with a Data Processor transformation
that generates Avro output, you pass the output to a complex file writer.

You can also auto-generate a Data Processor transformation with XML input, output, or both, with the New
Transformation wizard. Use an .xsd schema file or a sample file to define the expected XML hierarchy.

For more information, see the Informatica Data Transformation 9.6.1 User Guide.

Generate Schema from Sample File


When you add a sample file to define a hierarchy with the New Transformation wizard or the Schema wizard,
the wizard creates an .xsd schema file to define the hierarchy. The wizard creates the schema in the Model
repository. You can use the schema with other transformations.

For more information, see the Informatica Data Transformation 9.6.1 User Guide.

Relational Mapping Keys


Keys in a relational mapping can be of type xs:string and xs:integer.
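
For example, a schema fragment that declares key elements of both supported types. The element names
are illustrative:

    <xs:element name="CustomerKey" type="xs:string"/>
    <xs:element name="OrderKey" type="xs:integer"/>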

For more information, see the Informatica Data Transformation 9.6.1 User Guide.

Unread XMap Elements


You can choose to track XMap input elements that you do not map to output elements. The transformation
reports unmapped elements to the Default Handler output port named XMap_Unread_Input_Values.

For more information, see the Informatica Data Transformation 9.6.1 User Guide.

Match Transformation
This section describes new features to the Match transformation that you create in the Developer tool.

You can specify whether the transformation updates a current identity index data store with index data from
a mapping data source. Use the Persistence Method option to set the update policy. Set a policy to update
the data store with any index data from the data source that the data store does not contain. Alternatively,



set a policy that does not update the data store with index data. By default, the transformation updates the
data store.

For more information, see the Informatica 9.6.1 Developer Transformation Guide.

SQL Transformation
This section describes new features of the SQL transformation that you create in the Developer tool.

You can use the SQL transformation to invoke stored procedures from a Sybase database.

For more information, see the Informatica 9.6.1 Developer Transformation Guide.

Installer
This section describes new Informatica platform installer features in version 9.6.1.

Informatica Kerberos SPN Format Generator


You can run the Informatica Kerberos SPN Format Generator independently of the Informatica installer. You
can start the utility from the command line or from the Informatica installer. The Informatica Kerberos SPN
Format Generator installs with the Informatica services. After installation, you can start the utility from the
Informatica directory.

For more information, see the Informatica 9.6.1 Installation and Configuration Guide.

Service Principal Level


When you install the Informatica services with Kerberos authentication, you can set the Service Principal
Level option to specify whether nodes and services can share service principal names and keytab files. If the
domain does not require a high level of security, you can use one SPN and keytab file for the node and all the
service processes on the node. If the domain requires a high level of security, create a unique SPN and
keytab file for each node and each process on the node.

For more information, see the Informatica 9.6.1 Installation and Configuration Guide.

Mappings
This section describes new mapping features in version 9.6.1.

Informatica Mappings
This section describes new features of mappings that you create in the Developer tool.

IBM DB2 Partitioning


The Data Integration Service can use multiple partitions to write to an IBM DB2 target.

For more information, see the Informatica 9.6.1 Big Data Edition User Guide.



Metadata Manager
This section describes new Metadata Manager features in version 9.6.1.

Glossary View
When you view a category or business term in the Glossary view, you can open the category or term in the
Analyst tool by clicking the View in Informatica Analyst toolbar icon.

For more information, see the Informatica 9.6.1 Metadata Manager User Guide.

Resource Properties
Effective in version 9.6.1, database management, JDBC, and Microstrategy resources have new resource
configuration properties.

Database Management Resources

The following resource configuration property is new for database management resources:

Secure JDBC Parameters

Secure JDBC parameters that you want to append to the JDBC connection URL.

JDBC Resources

The following resource configuration property is new for JDBC resources:

Case sensitivity

Specifies the case sensitivity setting for the metadata source database. By default, the Metadata Manager
Agent uses the JDBC driver to determine whether the database is case sensitive.

Microstrategy Resources

The following resource configuration property is new for Microstrategy 7.0 - 9.x resources:

Import schema only

Imports the schemas for the selected projects without the reports and documents. By default, Metadata
Manager imports the schemas, reports, and documents.
For more information, see the Informatica 9.6.1 Metadata Manager Administrator Guide.

Resource Versions
You can create resources of the following versions:

• Business Objects 14.1 (XI 4.1 SP2). Previously, you could create Business Objects resources up to version
14 (XI R4) SP6.
• Microstrategy 9.4.1. Previously, you could create Microstrategy resources up to version 9.3.1.
• Oracle 12c. Previously, you could create Oracle resources up to version 11g Release 2.
For information about creating resources, see the Informatica 9.6.1 Metadata Manager Administrator Guide.



Search
You can create a custom list of words and phrases to ignore in keyword and advanced searches.

For more information, see the Informatica 9.6.1 Metadata Manager Administrator Guide.

Security
Metadata Manager contains the following security enhancements:

Encryption Key Support

Metadata Manager uses the encryption key for the Informatica domain to encrypt sensitive data, such as
passwords, in the Metadata Manager repository.

For more information about the encryption key for the Informatica domain, see the Informatica 9.6.1
Security Guide.

Secure JDBC Parameters

You can prevent the Administrator tool from displaying secure JDBC parameters that are part of the
Metadata Manager repository database URL. You can also prevent Metadata Manager from displaying
secure JDBC parameters that are part of the database connection URL for some database management
resources.

You can prevent Metadata Manager from displaying secure JDBC parameters for the following database
management resources:

• IBM DB2 for LUW


• IBM Informix
• Microsoft SQL Server
• Netezza
• Oracle
• Sybase ASE
• Teradata
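
For example, you might append secure parameters to a JDBC connection URL as follows. The parameter
names shown are common DataDirect driver options, and the URL is illustrative:

    jdbc:informatica:oracle://dbhost:1521;SID=orcl;EncryptionMethod=SSL;TrustStore=/infa/ssl/truststore.jks;TrustStorePassword=<password>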

For information about specifying secure JDBC parameters in the Metadata Manager repository database
URL, see the Informatica 9.6.1 Application Service Guide. For information about specifying secure JDBC
parameters in the database connection URL for database management resources, see the Informatica
9.6.1 Metadata Manager Administrator Guide.

Custom Metadata Configurator

To increase security for the PowerCenter repository, the Custom Metadata Configurator prompts you for
the PowerCenter repository user name and password when you generate the mappings that extract
metadata from custom metadata files.

For more information, see the Informatica 9.6.1 Metadata Manager Custom Metadata Integration Guide.



PowerExchange
This section describes new PowerExchange features in version 9.6.1.

Listener Service
This section describes new Listener Service features in version 9.6.1.

When you configure the domain to use Kerberos authentication, you can configure Informatica clients, the
Data Integration Service, and the PowerCenter Integration Service to find a PowerExchange Listener Service
in the domain.

To do so, include the optional service_name parameter in the NODE statement in the DBMOVER configuration
file on the client, Data Integration Service, or PowerCenter Integration Service machine.
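
To illustrate, a hedged sketch of a NODE statement in the DBMOVER configuration file. The base positional
syntax is standard, but the position shown for the optional service_name parameter is an assumption:

    /* NODE statement with an assumed trailing service_name parameter */
    NODE=(node1,TCPIP,pwxhost01,2480,,,,,PWXListenerService)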

For more information, see the Informatica 9.6.1 Application Service Guide.

infacmd pwx Commands


The following infacmd pwx command is new:

displayStatsListener

Displays monitoring statistics for a PowerExchange Listener on Windows or z/OS.

PowerExchange Adapters
This section describes new PowerExchange adapter features in version 9.6.1.

Informatica Adapters
This section describes new Informatica adapter features.

PowerExchange for DataSift

You can extract historical data from DataSift for Twitter sources.

For more information, see the Informatica PowerExchange for DataSift 9.6.1 User Guide.

PowerExchange for Greenplum

• You can use PowerExchange for Greenplum to load large volumes of data into Greenplum tables. You
can run mappings developed in the Developer tool. You can run the mappings in native or Hive run-
time environments.
• You can also use PowerExchange for Greenplum to load data to a HAWQ database in bulk.

For more information, see the Informatica PowerExchange for Greenplum 9.6.1 User Guide.

PowerExchange for LinkedIn

You can extract information about a group, information about posts of a group, comments about a group
post, and comments about specific posts from LinkedIn. You can also extract a list of groups suggested
for the user and a list of groups in which the user is a member from LinkedIn.

For more information, see the Informatica PowerExchange for LinkedIn 9.6.1 User Guide.

PowerExchange for HBase

You can use PowerExchange for HBase to read data in parallel from HBase. The Data Integration Service
creates multiple Map jobs to read data in parallel.

For more information, see the Informatica PowerExchange for HBase 9.6.1 User Guide.

PowerExchange for Hive

You can create a Hive connection that connects to HiveServer or HiveServer2. Previously, you could
create a Hive connection that connects to HiveServer. HiveServer2 supports Kerberos authentication and
concurrent connections.

For more information, see the Informatica PowerExchange for Hive 9.6.1 User Guide.

PowerExchange for MongoDB

You can use the Schema Editor to change the schema of MongoDB collections. You can also use virtual
tables for MongoDB collections that have nested columns.

For more information, see the Informatica PowerExchange for MongoDB 9.6.1 User Guide.

PowerExchange for Teradata Parallel Transporter API

When you load data to a Teradata table in a Hive run-time environment, you can use the Teradata
Connector for Hadoop (TDCH) to increase performance. To use TDCH to load data, add the EnableTdch
custom property at the Data Integration Service level and set its value to true.
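
The custom property is a name-value pair that you add at the Data Integration Service level, for example:

    EnableTdch=true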

For more information, see the Informatica PowerExchange for Teradata Parallel Transporter API 9.6.1
User Guide.

PowerCenter Adapters
This section describes new PowerCenter adapter features.

PowerExchange for LDAP

In the session properties, you can specify the path and name of the file that contains multiple filter
conditions to query the LDAP entries.
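
For example, the file might contain standard LDAP search filters such as the following. The one-filter-per-line
layout is an assumption; the filter syntax itself is standard:

    (&(objectClass=person)(department=Sales))
    (|(title=Manager)(title=Director))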

For more information, see the Informatica PowerExchange for LDAP 9.6.1 User Guide for PowerCenter.

PowerExchange for MongoDB

You can use the Schema Editor to change the schema of MongoDB collections. You can also use virtual
tables for MongoDB collections that have nested columns.

For more information, see the Informatica PowerExchange for MongoDB 9.6.1 User Guide for
PowerCenter.



PowerExchange for Netezza

• When you use bulk mode to read data from or write data to Netezza, you can override the table name
and schema name in the session properties.
• You can specify a table name prefix when you configure a session to load data to a Netezza target.
The table name prefix overrides the schema for the Netezza table.

For more information, see the Informatica PowerExchange for Netezza 9.6.1 User Guide for PowerCenter.

PowerExchange for Salesforce

• You can configure a session to use the Salesforce Bulk API to read data in bulk from a Salesforce
source.
• You can dissociate a custom child object from a standard parent object.

For more information, see the Informatica PowerExchange for Salesforce 9.6.1 User Guide for
PowerCenter.

PowerExchange for SAP NetWeaver

• When you run a file mode session to read data from SAP through ABAP, you can configure the
FileCompressEnable custom property to enable compressed data transfer. When you compress data,
you can increase the session performance and decrease the disk storage that the staging file needs.
• The Source_For_BCI relational target in the BCI listener mapping that Informatica ships contains a
new column called DataSourceName. You can use this field to partition the data that the
Source_For_BCI relational target receives from SAP.
• Informatica ships an activation mapping along with the BCI_Mappings.xml file. You can use the
activation mapping to activate multiple DataSources in SAP simultaneously.
• When you use numeric delta pointers to extract business content data, you can extract only the changed
data instead of performing a full transfer of the entire data set.

For more information, see the Informatica PowerExchange for SAP NetWeaver 9.6.1 User Guide for
PowerCenter.

Profiles and Scorecards


This section describes new profiles and scorecards features in version 9.6.1.

Column Profile Results

When you run a column profile in the Analyst tool, you can view the following visual charts in the column
profile results:

• Pie charts that represent the value frequencies and column patterns for a column.
• A bar chart that represents the percentage of rows with null values, unique values, and non-unique
values in a column.

Drill-down Filters

In the Analyst tool, you can right-click a column value in the drill-down results and add the column value
as a filter condition.

Value of Data Quality

You can measure the value of data quality using scorecards in the Analyst tool. Define a cost unit for a
scorecard metric, assign a variable or fixed cost, and view the cost trend chart along with the score trend
chart. You can then monitor the value of data that you selected at the metric and scorecard levels.



For more information, see the Informatica 9.6.1 Profile Guide.

Reference Data
This section describes new reference data features in version 9.6.1.

Probabilistic Models
You can perform the following tasks when you create or edit a probabilistic model in the Developer tool:

• You can assign a color to each label that you add to a probabilistic model.
• You can view the total number of labels that you assign to the data values in a row.
• You can view the total number of data values that the probabilistic model associates with a label.
For more information, see the Informatica 9.6.1 Reference Data Guide.

Rule Specifications
This section describes new rule specifications features in version 9.6.1.

You can perform the following tasks when you work with rule specifications in the Analyst tool:

• You can change the order of the rule statements in a rule set.
• You can test the operations of a single rule set.
• You can save the data that you use to test a rule set or a rule specification, and you can delete the data.
• You can specify a null value in a condition or an action in a rule statement.
• You can use data that you copy from Microsoft Excel to test a rule set or a rule specification.
For more information, see the Informatica 9.6.1 Rule Specification Guide.

Sources and Targets


This section describes new sources and targets features in version 9.6.1.

Informatica Sources and Targets


This section describes new features of sources and targets in Informatica.

HAWQ Connectivity
You can use ODBC to read data from and write data to a HAWQ database.

For more information, see the Informatica 9.6.1 Developer Tool Guide.



Data Types
Microsoft SQL Server Uniqueidentifier Data Type

Informatica Developer supports the Microsoft SQL Server Uniqueidentifier data type. The
Uniqueidentifier data type has a precision of 38 and a scale of 0.

For more information, see the Informatica 9.6.1 Developer Tool Guide.

Oracle Float Data Type

Informatica Developer supports the Oracle float data type. The float data type has a precision of 1 to 15
and a scale of 0.

For more information, see the Informatica 9.6.1 Developer Tool Guide.

PowerCenter Sources and Targets


This section describes new features of sources and targets in PowerCenter.

Oracle Sources and Targets


You can import Oracle sources and targets that use basic compression and OLTP compression. You can also
manually create source and target definitions for Oracle tables that use basic compression and OLTP
compression.

For more information, see the PowerCenter 9.6.1 Designer Guide.

Transformation Language Functions


This section describes new features of transformation language functions in version 9.6.1.

Informatica Functions
This section describes new features of Informatica functions.

ANY Function
You can use the ANY function to return any row in the selected port.
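
For example, a minimal Aggregator expression that uses the function; the port name is hypothetical:

    ANY( ORDER_ID )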

For more information, see the Informatica 9.6.1 Transformation Language Reference.



Chapter 25

Changes (9.6.1)
This chapter includes the following topics:

• Big Data
• Domain
• Informatica Transformations
• Mappings
• Metadata Manager
• PowerCenter Transformations
• PowerExchange Adapters
• Profiles and Scorecards
• Rule Specifications
• Security

Big Data
This section describes changes to Big Data in version 9.6.1.

Effective in version 9.6.1, you can choose not to select a Hive version for the validation environment when
you configure a mapping to run in the Hive environment.

The Data Integration Service evaluates a valid Hive version for the Hadoop cluster and validates the mapping.

Previously, you had to select a Hive version for the validation environment.

Domain
This section describes changes to the Informatica domain in version 9.6.1.

Effective in version 9.6.1, Informatica dropped support for SUSE Linux Enterprise Server 10. If any node in the
domain is on SUSE Linux Enterprise Server 10, you must migrate the node to a supported operating system
before upgrading the node to 9.6.1. For more information, see the Informatica upgrade guides.

Informatica Transformations
This section describes changes to Informatica transformations in version 9.6.1.

Address Validator Transformation


This section describes changes to the Address Validator transformation that you create in the Developer tool.

Effective in version 9.6.1, the Address Validator transformation uses version 5.5.0 of the Address Doctor
software engine.

Previously, the transformation used version 5.4.1 of the Address Doctor software engine.

Effective in version 9.6.1, the transformation adds a two-character country code to the following port names:

• Choumei Aza Code JP. Previously, the port name was Choumei Aza Code.
• New Choumei Aza Code JP. Previously, the port name was New Choumei Aza Code.
• Postal Address Code RS. Previously, the port name was Postal Address Code.
• Unique Delivery Point Reference Number GB. Previously, the port name was Unique Delivery Point
Reference Number.

Effective in version 9.6.1, you can disable the Alias Street property on the transformation. The property
determines whether address validation replaces a street alias with the official street name.

Previously, you configured the property to replace all street aliases or to replace any term that is not a valid
street alias.

Data Masking Transformation


This section describes changes to the Data Masking transformation that you create in the Developer tool.

Key Masking Technique


Effective in version 9.6.1, the key masking algorithm is changed. A mapping created in an earlier version that
uses the key masking technique might create different masked output after upgrade to 9.6.1.

Previously, a mapping that used the key masking technique would create the same masked output when run
after upgrade.

Data Processor Transformation


This section describes changes to the Data Processor transformation that you create in the Developer tool.

Effective in version 9.6.1, you can export a Data Processor transformation to PowerCenter with pass-through
ports or a relational to hierarchical transformation. Previously, you could only export Data Processor
transformations to PowerCenter if they did not have relational input or output.



Mappings
This section describes changes to mappings in version 9.6.1.

Informatica Mappings
This section describes changes to mappings that you create in the Developer tool.

Partitioned Mappings in the Native Environment


Effective in version 9.6.1, partitioned mappings in the native environment include the following changes:

IBM DB2 for LUW Relational Targets

The Data Integration Service can create partitions for a mapping when the mapping contains a DB2 for
LUW target that has more database partitions than the parallelism value. If the DB2 for LUW target has
more database partitions than the parallelism value, the Data Integration Service uses all of the writer
threads defined by the parallelism value. The Data Integration Service distributes multiple database
partitions to some of the writer threads.

Previously, if the DB2 for LUW target had more database partitions than the parallelism value, the Data
Integration Service did not create partitions for the entire mapping. The Data Integration Service used
one thread to process each mapping pipeline stage.

Mapping Maximum Parallelism

When the maximum parallelism for a mapping is Auto, the actual parallelism value equals the minimum
of the following values:

• Maximum parallelism value set for the Data Integration Service process.
• Maximum number of partitions for all flat file, IBM DB2 for LUW, and Oracle sources in the mapping.
The Data Integration Service determines the number of partitions based on the source type. The
number of partitions for a flat file source equals the maximum parallelism value set for the Data
Integration Service process. The number of partitions for a DB2 for LUW or Oracle relational source
equals the number of database partitions in the relational source.

Previously, when the maximum parallelism for a mapping was Auto, the actual parallelism value equaled
the maximum parallelism value set for the Data Integration Service process.

Metadata Manager
This section describes changes to Metadata Manager in version 9.6.1.

Resource Configuration Import and Export


Effective in version 9.6.1, there are behavior changes related to resource configuration import and export.

Password Import and Export

Effective in version 9.6.1, when you export a resource configuration through Metadata Manager or
mmcmd, you can include or exclude the encrypted resource password in the resource configuration file.
If you exclude the password, and the resource uses a password, you must enter it when you import the
resource configuration.



Previously, Metadata Manager always included the encrypted resource password in the resource
configuration file.

Privilege Changes

Effective in version 9.6.1, you can export a resource configuration if you have the View Resource
privilege. You can import a resource configuration if you have the Load Resource privilege.

Previously, to export or import a resource configuration, you needed the Load Resource privilege.

Resource Property Changes


Effective in version 9.6.1, Microstrategy 7.0 - 9.x resources have resource property changes.

The following table describes the deleted resource configuration properties for Microstrategy 7.0 - 9.x
resources:

Data model reverse engineer joins

Optionally, transforms SQL joins of a model into foreign key relationships.

Dimensional model reverse engineering

Optionally, reverse engineers the following dimensional objects into relational objects when there is a direct match between the dimensional object and the relational object:
- The dimension name, description, and role to the underlying table
- The attribute or measure name, description, and datatype to the underlying column

PowerCenter Transformations
This section describes changes to PowerCenter transformations in version 9.6.1.

Data Masking Transformation


This section describes changes to the Data Masking transformation that you create in the PowerCenter
Client.

Key Masking Technique


Effective in version 9.6.1, the key masking algorithm is changed. A mapping created in an earlier version that
uses the key masking technique might create different masked output after upgrade to 9.6.1.

Previously, a mapping that used the key masking technique would create the same masked output when run
after upgrade.

PowerExchange Adapters
This section describes changes to PowerExchange adapters in version 9.6.1.

PowerExchange Adapters for PowerCenter


This section describes changes to PowerCenter adapters in version 9.6.1.

PowerExchange for Salesforce
Effective in version 9.6.1, PowerExchange for Salesforce includes the following changes:
End of Life for Salesforce API Versions

PowerExchange for Salesforce does not support the following Salesforce API versions:

• 7.0
• 8.0
• 16.0

Previously, PowerExchange for Salesforce supported these Salesforce API versions.

Error Logging

The PowerCenter Integration Service writes error messages to the error log for the session.

Previously, the PowerCenter Integration Service wrote error messages to both the error log and the
session log.

Java Requirements for Bulk API Target Sessions

For Bulk API target sessions, configure 10 to 50 MB of space for the Java temporary directory on the
PowerCenter Integration Service machine.

Previously, the Bulk API did not use the Java temporary directory when writing to Salesforce targets.

Related Object Fields No Longer Available for Import

You can no longer import fields from objects related to the following Salesforce objects:

• ActivityHistory
• EmailStatus
• Name
• OpenActivity
• OwnedContentDocument

Previously, you could import fields from objects related to these objects.

Salesforce API Version

PowerExchange for Salesforce uses version 31.0 of the Salesforce API.

Use the Salesforce service URL to configure connections to Salesforce. To use the latest version of the
Salesforce API, create an application connection or update the service URL in an existing application
connection.

Use the following version of the Salesforce service URL:


[Link]
If the new version of a Salesforce object has a different structure than the previous version of the object,
re-import the Salesforce object. After you re-import the object, analyze the associated mapping to
determine if you need to update the mapping.

Previously, PowerExchange for Salesforce used version 27.0 of the Salesforce API.

SOAP Request Logging

For sessions that read from Salesforce with the standard API, the PowerCenter Integration Service no
longer includes SOAP requests in the session log.

Previously, you could view SOAP requests in session logs when you configured the session for verbose
tracing.

Profiles and Scorecards


This section describes changes to profiles and scorecards in version 9.6.1.

Effective in version 9.6.1, the total count of unique values in column profile results does not include the null
column values.

Previously, null column values were included in the total count of unique values.
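A minimal sketch of the change, assuming that "unique values" here means the distinct values in a column; the sample data is invented.

    column_values = ["A", "B", "B", None, None]
    # 9.6.1 behavior: null values are excluded from the count of unique values.
    print(len({value for value in column_values if value is not None}))  # 2
    # Previously, the null values would also have contributed to the count.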

Rule Specifications
This section describes changes to rule specifications in version 9.6.1.

Effective in version 9.6.1, you can use the rule statement options to specify a data value or a null value for a
condition or action.

Previously, you opened a configuration dialog box in the rule statement to specify a data value or a null
value.

Effective in version 9.6.1, you do not need the Informatica domain access permission to perform the
following operations:

• Test a rule set or a rule specification.
• Compile a rule specification.

Previously, you needed the Informatica domain access permission to test a rule set or a rule specification
and to compile a rule specification.

Security
This section describes changes to security in version 9.6.1.

Encryption Key Directory


Effective in version 9.6.1, the directory where the domain encryption key is stored has changed. The new
encryption key directory is <INFA_HOME>/isp/config/keys.

Previously, the encryption key directory was <INFA_HOME>/isp/config/secret.

Service Principal Requirements for Kerberos Authentication


Effective in 9.6.1, when you configure the domain to use Kerberos authentication, you can specify whether
nodes and services can share service principal names (SPN) and keytab files.

You can select one of the following service principal levels:

Node Level

If the domain is used for testing or development and does not require a high level of security, you can set
the service principal at the node level. You can use one SPN and keytab file for the node and all the
service processes on the node. When you create additional services on a node, you do not need to create
additional keytab files.

Process Level

If the domain is used for production and requires a high level of security, you can set the service
principal at the process level. Create a unique SPN and keytab file for each node and each process on
the node. The number of SPNs and keytab files required for each node depends on the number of service
processes that run on the node.

Previously, the Informatica domain required a unique SPN and keytab file for each node and each process on
the node.

Part VI: Version 9.6.0
This part contains the following chapters:

• New Features and Enhancements (9.6.0)
• Changes to Informatica Data Explorer (9.6.0)
• Changes to Informatica Data Quality (9.6.0)
• Changes to Informatica Data Services (9.6.0)
• Changes to Informatica Data Transformation (9.6.0)
• Changes to Informatica Domain (9.6.0)
• Changes to PowerCenter (9.6.0)
• Changes to PowerCenter Big Data Edition (9.6.0)
• Changes to Metadata Manager (9.6.0)
• Changes to Adapters for PowerCenter (9.6.0)
• Changes to Adapters for Informatica (9.6.0)

Chapter 26

New Features and Enhancements (9.6.0)

This chapter includes the following topic:

• Version 9.6.0

Version 9.6.0
This section describes new features and enhancements in version 9.6.0.

Informatica Analyst
This section describes new features and enhancements to Informatica Analyst.

Informatica Analyst Interface


The Analyst tool interface has new headers and workspaces. A workspace is a web page where you perform
tasks based on licensed functionality that you access through tabs in the Analyst tool.

The Analyst tool has the following workspaces:

• Start. Access other workspaces that you have the license to access through access panels on this
workspace. If you have the license to perform exception management, your tasks appear in this
workspace.
• Glossary. Define and describe business concepts that are important to your organization.
• Discovery. Analyze the quality of data and metadata in source systems.
• Design. Design business logic that helps analysts and developers collaborate.
• Scorecards. Open, edit, and run scorecards that you created from profile results.
• Library. Search for assets in the Model repository. You can also view metadata in the Library workspace.
• Exceptions. View and manage exception record data for a task. View duplicate record clusters or
exception records based on the type of task you are working on. View an audit trail of the changes you
make to records in a task.
• Connections. Create and manage connections to import relational data objects, preview data, run a profile,
and run mapping specifications.
• Data Domains. Create, manage, and remove data domains and data domain groups.
• Job Status. Monitor the status of Analyst tool jobs such as data preview for all objects and drilldown
operations on profiles.
• Projects. Create and manage folders and projects and assign permissions on projects.
• Glossary Security. Manage permissions, privileges, and roles for business glossary users.

Informatica Analyst Tasks


The Analyst tool is available to multiple Informatica products and is used by business users to collaborate on
projects within an organization.

The tasks that you can perform in the Analyst tool depend on the license for Informatica products and the
privileges to perform tasks. Based on the license that your organization has, you can use the Analyst tool to
perform the following tasks:

• Define business glossaries, terms, and policies to maintain standardized definitions of data assets in the
organization.
• Perform data discovery to find the content, quality, and structure of data sources, and monitor data quality
trends.
• Define data integration logic and collaborate on projects to accelerate project delivery.
• Define and manage rules to verify data conformance to business policies.
• Review and resolve data quality issues to find and fix data quality issues in the organization.

Flat File Delimiters


When you import a delimited flat file, you can input the following non-printing multibyte characters as
delimiters: /01, /01, and /001.

For more information, see the Informatica 9.6.0 Analyst Tool Guide.

Informatica Installer
This section describes new features and enhancements to the Informatica platform installer.

Accessibility and Section 508 Compliance


The Informatica platform installer conforms to Section 508 of the Rehabilitation Act and is accessible to
people with disabilities.

Authentication
You can configure the Informatica domain to use Kerberos authentication. When you install the Informatica
services, you can enable Kerberos authentication for the domain. A page titled Domain - Network
Authentication Protocol appears in the Informatica services installer. To install the domain with Kerberos
authentication, select the option to enable Kerberos authentication and enter the required parameters.

Encryption Key
Informatica encrypts sensitive data such as passwords when it stores data in the domain. Informatica uses a
keyword to generate a unique encryption key with which to encrypt sensitive data stored in the domain.

A page titled Domain - Encryption Key appears in the Informatica services installer. If you create a node and
a domain during installation, you must specify a keyword for Informatica to use to generate a unique
encryption key for the node and domain. If you create a node and join a domain, Informatica uses the same
encryption key for the new node.
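As a generic illustration only, the sketch below derives a key from a keyword with PBKDF2. This is not Informatica's actual derivation scheme, and the keyword and salt values are invented.

    import hashlib

    def derive_key(keyword: str, salt: bytes) -> bytes:
        # Stretch the user-supplied keyword into 256 bits of key material.
        return hashlib.pbkdf2_hmac("sha256", keyword.encode("utf-8"), salt, 100_000)

    key = derive_key("MyKeyword123", b"example-salt")
    print(key.hex())  # unique key material derived from the keyword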

Secure Communication
You can provide an SSL certificate or use the default Informatica SSL certificate to secure communication
between services in the domain. To use your SSL certificate, specify a keystore and truststore file and
password during installation.

For more information, see the Informatica 9.6.0 installation and upgrade guides.

Informatica Data Explorer


This section describes new features and enhancements to Informatica Data Explorer.

Column Profile Results


The column profile results include the sum of all values in columns with a numeric datatype.

For more information, see the Informatica Data Explorer 9.6.0 Data Discovery Guide.

Use the TOTAL_SUM column in the following relational database views to access the profiling warehouse for
information about the sum of values in numeric columns:

• IDPV_COL_PROFILE_RESULTS
• IDPV_PROFILE_RESULTS_TRENDING

For more information, see the Informatica 9.6.0 Database View Reference.

Curation
You can curate inferred profile results in both Analyst and Developer tools. Curation is the process of
validating and managing discovered metadata of a data source so that the metadata is fit for use and
reporting. You can approve, reject, and restore datatypes. You can also approve, reject, and restore data
domains, primary keys, and foreign keys. You can hide or show rows containing rejected datatypes or data
domains. You can exclude approved datatypes, data domains, and primary keys from column profile
inference and data domain discovery inference when you run the profile again.

For more information, see the Informatica Data Explorer 9.6.0 Data Discovery Guide.

Use the following relational database views to access profiling warehouse for information about curated
profile results:

• IDPV_CURATED_DATATYPES
• IDPV_CURATED_DATADOMAINS
• IDPV_CURATED_PRIMARYKEYS
• IDPV_CURATED_FOREIGNKEYS

For more information, see the Informatica 9.6.0 Database View Reference.

Data Domain Discovery


You can run data domain discovery on all rows of the source data to verify the inference results for multiple
columns at the same time.

For more information, see the Informatica Data Explorer 9.6.0 Data Discovery Guide.

Datatype Inference
You can infer multiple datatypes that match the inference criteria when you run a column profile. You can drill
down based on a column datatype in column profile results.

For more information, see the Informatica Data Explorer 9.6.0 Data Discovery Guide.

Use the following relational database views to access profiling warehouse for information on inferred
datatypes:

• IDPV_DATATYPES_INF_RESULTS
• IDPV_DATATYPE_FREQ_TRENDING

For more information, see the Informatica 9.6.0 Database View Reference.

Discovery Search
Discovery search finds assets and identifies relationships to other assets in the databases and schemas of
the enterprise. You can use discovery search to find where the data and metadata exists in the enterprise.
You can find physical data sources and data object relationships or you can identify the lack of documented
data object relationships. You can view the direct matches, indirect matches, and related assets from the
discovery search results.

If you perform a global search, the Analyst tool performs a text-based search for data objects, datatypes, and
folders. If you perform discovery search, in addition to the text matches, search results include objects with
relationships to the objects that match the search criteria.

For more information, see the Informatica Data Explorer 9.6.0 Data Discovery Guide.

Enterprise Discovery
You can perform enterprise discovery in Informatica Analyst. The enterprise discovery includes column
profile and data domain discovery.

For more information, see the Informatica Data Explorer 9.6.0 Data Discovery Guide.

Profile Results Verification


You can verify multiple inferred primary key and functional dependency results for a single data object in the
Developer tool. When you verify the profile results, the Developer tool runs the profile on all rows of the
source data. You can also verify multiple data object relationships and data domains in the enterprise
discovery results.

For more information, see the Informatica Data Explorer 9.6.0 Data Discovery Guide.

Scorecards
You can export scorecard results to a Microsoft Excel file. The exported file contains scorecard summary,
trend charts, rows that are not valid, and scorecard properties.

For more information, see the Informatica Data Explorer 9.6.0 Data Discovery Guide.

Support for bigint Datatype


You can run a profile on a data source with a large number of rows, such as many billions of rows. The
profiling warehouse uses the bigint column to handle large volumes of source data.

For more information, see the Informatica Data Explorer 9.6.0 Data Discovery Guide.

Informatica Data Quality


This section describes new features and enhancements to Informatica Data Quality.

Accelerators
The set of Informatica accelerators has the following additions:

• Informatica Data Quality Accelerator for Spain. Contains rules, reference tables, demonstration mappings,
and demonstration data objects that solve common data quality issues in Spanish data.

• Informatica Data Quality Accelerator for Data Discovery. Contains rules, reference tables, demonstration
mappings, and demonstration data objects that you can use to perform data discovery operations.
For more information, see the Informatica Data Quality 9.6.0 Accelerator Guide.

Address Validation
You can configure the following advanced properties on the Address Validator transformation:

Dual Address Priority

Determines the type of address to validate. Set the property when input address records contain more
than one type of valid address data.

Flexible Range Expansion

Imposes a practical limit on the number of suggested addresses that the transformation returns when
there are multiple valid addresses on a street. Set the property when you set the Ranges to Expand
property.

Geocode Data Type

Determines how the transformation calculates geocode data for an address. Geocodes are latitude and
longitude coordinates. Set the property to return the following types of geocode data:

• The latitude and longitude coordinates of the entrance to a building or a plot of land.
• The latitude and longitude coordinates of the geographic center of a plot of land.

The transformation can also estimate the latitude and longitude coordinates for an address. Estimated
geocodes are called interpolated geocodes.

Global Max Field Length

Determines the maximum number of characters on any line in the address. Set the property to verify that
the line length in an address does not exceed the requirements of the local mail carrier.

Ranges To Expand

Determines how the transformation returns suggested addresses for a street address that does not
specify a house number. Set the property to increase or decrease the range of suggested addresses for
the street.

Standardize Invalid Addresses


Determines if the transformation standardizes data values in an undeliverable address. Set the property
to simplify the terminology in the address record so that downstream data processes can run more
efficiently.

You can configure the following address validation process property in the Administrator tool:

SendRight Report Location

The location to which address validation writes a SendRight report and any log file that relates to the
creation of the report. Generate a SendRight report to verify that a set of New Zealand address records
meets the certification standards of New Zealand Post.

Note: You configure the Address Validator transformation to create a SendRight report file.

For more information, see the Informatica 9.6.0 Developer Transformation Guide.

Automatic Workflow Recovery


You can configure automatic recovery of aborted workflow instances due to an unexpected shutdown of the
Data Integration Service process. When you configure automatic recovery, the Data Integration Service
process recovers aborted workflow instances due to a service process shutdown when the service process
restarts.

For more information, see the Informatica 9.6.0 Developer Workflow Guide.

Business Glossary
Business Glossary comprises online glossaries of business terms and policies that define important
concepts within an organization. Data stewards create and publish terms that include information such as
descriptions, relationships to other terms, and associated categories. Glossaries are stored in a central
location for easy lookup by end-users.

Business Glossary is made up of glossaries, business terms, policies, and categories. A glossary is the
high-level container that stores other glossary content. A business term defines relevant concepts within the
organization, and a policy defines the business purpose that governs practices related to the term. Business
terms and policies can be associated with categories, which are descriptive classifications. You can access
Business Glossary through Informatica Analyst (the Analyst tool).

For more information, see the Informatica 9.6.0 Business Glossary Guide.

Column Profile Results


The column profile results include the sum of all values in columns with a numeric datatype.

For more information, see the Informatica Data Explorer 9.6.0 Data Discovery Guide.

Use the TOTAL_SUM column in the following relational database views to access the profiling warehouse for
information about the sum of values in numeric columns:

• IDPV_COL_PROFILE_RESULTS
• IDPV_PROFILE_RESULTS_TRENDING

For more information, see the Informatica 9.6.0 Database View Reference.

Curation
You can curate inferred profile results in both Analyst and Developer tools. Curation is the process of
validating and managing discovered metadata of a data source so that the metadata is fit for use and
reporting. You can approve, reject, and restore datatypes. You can also approve, reject, and restore data
domains, primary keys, and foreign keys. You can hide or show rows containing rejected datatypes or data
domains. You can exclude approved datatypes, data domains, and primary keys from column profile
inference and data domain discovery inference when you run the profile again.

For more information, see the Informatica Data Explorer 9.6.0 Data Discovery Guide.

Use the following relational database views to access profiling warehouse for information about curated
profile results:

• IDPV_CURATED_DATATYPES
• IDPV_CURATED_DATADOMAINS
• IDPV_CURATED_PRIMARYKEYS
• IDPV_CURATED_FOREIGNKEYS

For more information, see the Informatica 9.6.0 Database View Reference.

Datatype Inference
You can infer multiple datatypes that match the inference criteria when you run a column profile. You can drill
down based on a column datatype in column profile results.

For more information, see the Informatica Data Explorer 9.6.0 Data Discovery Guide.

Use the following relational database views to access profiling warehouse for information on inferred
datatypes:

• IDPV_DATATYPES_INF_RESULTS
• IDPV_DATATYPE_FREQ_TRENDING

For more information, see the Informatica 9.6.0 Database View Reference.

Identity Index Data Persistence


You can configure a Match transformation to write the identity index data for a data source to database
tables. You can configure a Match transformation to compare a data source to the identity index data in the
database tables. Because the index data for one of the two data sources is stored, the identity match
mappings take less time to run.

When you configure a Match transformation to read index tables, you control the types of record that the
transformation analyzes and the types of output that the transformation generates. You can configure the
transformation to analyze all the records in the data sources or a subset of the records. You can configure
the transformation to write all records as output or a subset of the records.

For more information, see the Informatica 9.6.0 Developer Transformation Guide.

Java Transformation
In a Java transformation, you can configure an input port as a partition key, a sort key, and assign a sort
direction. The partition key and sort key are valid when you process the transformation in a mapping that
runs in a Hive environment.

For more information, see the Informatica 9.6.0 Developer Transformation Guide.

Lookup Transformation
If you cache the lookup source for a Lookup transformation, you can use a dynamic cache to update the
lookup cache based on changes to the target. The Data Integration Service updates the cache before it
passes each row to the target.

For more information, see the Informatica 9.6.0 Developer Transformation Guide.

Normalizer Transformation
The Normalizer transformation is an active transformation that transforms one source row into multiple
output rows. When a Normalizer transformation receives a row that contains repeated fields, it generates an
output row for each instance of the repeated data.

Use the Normalizer transformation when you want to organize repeated data from a relational or flat file
source before you load the data to a target.

For more information, see the Informatica 9.6.0 Developer Transformation Guide.
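The sketch below mimics what the transformation does with a row that contains repeated fields; the quarterly sales columns and field names are hypothetical.

    # One input row with three repeated fields becomes three output rows.
    def normalize(row, repeated_fields):
        base = {key: value for key, value in row.items() if key not in repeated_fields}
        for occurrence, field in enumerate(repeated_fields, start=1):
            yield {**base, "occurrence": occurrence, "value": row[field]}

    source_row = {"store": "S01", "q1_sales": 100, "q2_sales": 120, "q3_sales": 90}
    for output_row in normalize(source_row, ["q1_sales", "q2_sales", "q3_sales"]):
        print(output_row)
    # {'store': 'S01', 'occurrence': 1, 'value': 100} ... one row per quarter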

Performance
In the Developer tool you can enable a mapping to perform the following optimizations:

• Push a Union transformation to a relational data object.
• Push Filter, Expression, Union, Sorter, and Aggregator transformations to a Hive relational object.

For more information, see the Informatica 9.6.0 Mapping Guide.

Profile Results Verification


You can verify multiple inferred primary key and functional dependency results for a single data object in the
Developer tool. When you verify the profile results, the Developer tool runs the profile on all rows of the
source data. You can also verify multiple data object relationships and data domains in the enterprise
discovery results.

For more information, see the Informatica Data Explorer 9.6.0 Data Discovery Guide.

Pushdown Optimization
The Data Integration Service can push expression, aggregator, operator, union, sorter, and filter functions to
Greenplum sources when the connection type is ODBC.

For more information, see the Informatica 9.6.0 Mapping Guide.

Rule Builder
Rule Builder is an Informatica Analyst feature that converts business rule requirements to transformation
logic. You save the business rule requirements in a rule specification. When you compile the rule
specification, the Analyst tool creates transformations that can analyze the business data according to the
requirements that you defined. The Analyst tool saves the transformations to one or more mapplets in the
Model repository.

A rule specification contains one or more IF-THEN statements. The IF-THEN statements use logical operators
to determine if the input data satisfies the conditions that you specify. You can use AND operators to link IF
statements and verify that a data value satisfies multiple conditions concurrently. You can define statements
that compare data from different inputs and test the inputs under different mathematical conditions. You can
also link statements so that the output from one statement becomes the input to another.
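The sketch below shows the shape of an AND-linked IF-THEN statement in ordinary code; the field names and conditions are invented and do not reflect Rule Builder syntax.

    def apply_rule(record):
        # IF both conditions hold at once (AND-linked), THEN return the
        # configured output value; otherwise fall through.
        if record["country"] == "US" and record["postal_code"] is not None:
            return "Valid"
        return "Invalid"

    print(apply_rule({"country": "US", "postal_code": "94063"}))  # Valid
    print(apply_rule({"country": "US", "postal_code": None}))     # Invalid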

Rule Builder represents a link between business users and the Informatica development environment.
Business users can log in to the Analyst tool to create mapplets. Developer tool users add the mapplets to
mappings and verify that the business data conforms to the business rules.

For more information, see the Informatica 9.6.0 Rule Builder Guide.

Scorecards
You can export scorecard results to a Microsoft Excel file. The exported file contains scorecard summary,
trend charts, rows that are not valid, and scorecard properties.

For more information, see the Informatica Data Explorer 9.6.0 Data Discovery Guide.

Sequence Generator Transformation


Effective in 9.6.0, you can use the Sequence Generator transformation to add a sequence of values to your
mappings.

For more information, see the Informatica 9.6.0 Developer Transformation Guide.

Informatica Data Services


This section describes new features and enhancements to Informatica Data Services.

Column Profile Results


The column profile results include the sum of all values in columns with a numeric datatype.

For more information, see the Informatica Data Explorer 9.6.0 Data Discovery Guide.

Use the TOTAL_SUM column in the following relational database views to access the profiling warehouse for
information about the sum of values in numeric columns:

• IDPV_COL_PROFILE_RESULTS
• IDPV_PROFILE_RESULTS_TRENDING

For more information, see the Informatica 9.6.0 Database View Reference.

Curation
You can curate inferred profile results in both Analyst and Developer tools. Curation is the process of
validating and managing discovered metadata of a data source so that the metadata is fit for use and
reporting. You can approve, reject, and restore datatypes. You can also approve, reject, and restore data
domains, primary keys, and foreign keys. You can hide or show rows containing rejected datatypes or data
domains. You can exclude approved datatypes, data domains, and primary keys from column profile
inference and data domain discovery inference when you run the profile again.

For more information, see the Informatica Data Explorer 9.6.0 Data Discovery Guide.

Use the following relational database views to access profiling warehouse for information about curated
profile results:

• IDPV_CURATED_DATATYPES
• IDPV_CURATED_DATADOMAINS
• IDPV_CURATED_PRIMARYKEYS
• IDPV_CURATED_FOREIGNKEYS

For more information, see the Informatica 9.6.0 Database View Reference.

Datatype Inference
You can infer multiple datatypes that match the inference criteria when you run a column profile. You can drill
down based on a column datatype in column profile results.

For more information, see the Informatica Data Explorer 9.6.0 Data Discovery Guide.

Use the following relational database views to access profiling warehouse for information on inferred
datatypes:

• IDPV_DATATYPES_INF_RESULTS
• IDPV_DATATYPE_FREQ_TRENDING

For more information, see the Informatica 9.6.0 Database View Reference.

Data Masking Transformation


The Data Masking transformation has the following new features in this release:

• The Data Masking transformation is supported on Hadoop clusters. You can run the transformation in a
Hive environment.
• Tokenization is a masking technique in which you can provide JAR files with your own algorithm or logic
to mask string data.
• You can use the Phone masking technique to mask fields with numeric integer and numeric bigint
datatypes.

For more information, see the Informatica 9.6.0 Developer Transformation Guide.

Java Transformation
In a Java transformation, you can configure an input port as a partition key, a sort key, and assign a sort
direction. The Partition key and Sort key are valid when you process the transformation in a mapping that
runs in a Hive environment.

For more information, see the Informatica 9.6.0 Developer Transformation Guide.

Normalizer Transformation
The Normalizer transformation is an active transformation that transforms one source row into multiple
output rows. When a Normalizer transformation receives a row that contains repeated fields, it generates an
output row for each instance of the repeated data.

Use the Normalizer transformation when you want to organize repeated data from a relational or flat file
source before you load the data to a target.

For more information, see the Informatica 9.6.0 Developer Transformation Guide.

Performance
In the Developer tool you can enable a mapping to perform the following optimizations:

• Push a custom SQL query to a relational data object.
• Push operations such as Union, Union All, Intersect, Intersect All, Minus, Minus All, and Distinct to a relational data object.
• Perform early selection and push queries that contain the SQL keyword LIMIT to a relational data object.
• Push a Union transformation to a relational data object.
• Push Filter, Expression, Union, Sorter, and Aggregator transformations to a Hive relational object.

For more information, see the Informatica 9.6.0 Developer User Guide, Informatica 9.6.0 SQL Data Service
Guide, and Informatica 9.6.0 Mapping Guide.

Profile Results Verification


You can verify multiple inferred primary key and functional dependency results for a single data object in the
Developer tool. When you verify the profile results, the Developer tool runs the profile on all rows of the
source data. You can also verify multiple data object relationships and data domains in the enterprise
discovery results.

For more information, see the Informatica Data Explorer 9.6.0 Data Discovery Guide.

Pushdown Optimization for Greenplum


The Data Integration Service can push expression, aggregator, operator, union, sorter, and filter functions to
Greenplum sources when the connection type is ODBC.

For more information, see the Informatica 9.6.0 Mapping Guide.

Pushdown Optimization for SAP HANA


The Data Integration Service can push transformation logic to SAP HANA sources when the connection type
is ODBC.

For more information, see the Informatica 9.6.0 Mapping Guide.

Pushdown Optimization for Teradata


The Data Integration Service can push transformation logic to Teradata sources when the connection type is
ODBC.

For more information, see the Informatica 9.6.0 Mapping Guide.

REST Web Service Consumer Transformation


The REST Web Service Consumer transformation consumes REST web services in a mapping. The
transformation can use GET, PUT, POST, and DELETE HTTP operations.

You can create a REST Web Service Consumer transformation from a Schema object or add elements to an
empty transformation.

For more information, see the Informatica 9.6.0 Developer Transformation Guide.
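The four HTTP operations correspond to the usual REST verbs. The sketch below shows them with the Python requests library against an invented endpoint; it illustrates the operations themselves, not the transformation's configuration.

    import requests

    base_url = "https://example.com/api/customers"  # hypothetical endpoint
    requests.post(base_url, json={"name": "Acme"})             # POST: create
    requests.get(f"{base_url}/1")                              # GET: read
    requests.put(f"{base_url}/1", json={"name": "Acme Corp"})  # PUT: update
    requests.delete(f"{base_url}/1")                           # DELETE: remove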

Scorecards
You can export scorecard results to a Microsoft Excel file. The exported file contains scorecard summary,
trend charts, rows that are not valid, and scorecard properties.

For more information, see the Informatica Data Explorer 9.6.0 Data Discovery Guide.

Sequence Generator Transformation


You can now use the Sequence Generator transformation to add a sequence of values to your mappings.

For more information, see the Informatica 9.6.0 Developer Transformation Guide.

Stored Procedures
You can use the SQL transformation to invoke stored procedures from a relational database. You can create
the SQL transformation in the Developer tool by importing a stored procedure. The Developer tool adds the
ports and the stored procedure call. You can manually add more stored procedure calls in the SQL
transformation. Return zero rows, one row, or result sets from the stored procedure.

For more information, see the Informatica 9.6.0 Developer Transformation Guide.

Tableau
You can query a deployed SQL data service with Tableau through the Informatica Data Services ODBC driver.

For more information, see the Informatica 9.6.0 Data Services Guide.

Web Service Consumer Transformation


The Web Service Consumer transformation has the following new features in this release:

• The external web service provider can authenticate the Integration Service using NTLMv2.
• In a Web Service Consumer transformation, you can use a WSDL with a one-way message pattern.
For more information, see the Informatica 9.6.0 Developer Transformation Guide.

Informatica Data Transformation


This section describes new features and enhancements to Informatica Data Transformation.

Data Processor Transformation Wizard


You can use a wizard to create a Data Processor transformation in the Developer tool with COBOL, ASN.1,
relational, or JSON input or output.

For more information about the wizard, see the Informatica 9.6.0 Data Transformation User Guide.

Relational Input
A Data Processor transformation can transform relational input into hierarchical output.

For more information about relational input, see the Informatica 9.6.0 Data Transformation User Guide.

XMap with JSON


You can create an XMap that reads or writes directly to JSON.

For more information about XMap or JSON, see the Informatica 9.6.0 Data Transformation User Guide.

XMap with Transformers


In an XMap mapping statement, you can include any user-defined transformer with the dp:transform function.
Use the XPath Editor to add the dp:transform function to the input, output, or condition fields.

For more information about XPath and the XPath editor, see the Informatica 9.6.0 Data Transformation User
Guide.

Informatica Developer
This section describes new features and enhancements to Informatica Developer.

Alerts
In the Developer tool, you can view connection status alerts in the Alerts view.

For more information, see the Informatica 9.6.0 Developer Tool Guide.

Functions
In the Developer tool, you can use the following functions in the transformation language:

• UUID4(). Returns a randomly generated 16-byte binary value.
• UUID_UNPARSE(binary). Takes a 16-byte binary argument and returns a 36-character string.

For more information, see the Informatica 9.6.0 Developer Transformation Language Reference.
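Because UUID4() returns a 16-byte binary value and UUID_UNPARSE takes one, the two functions can be composed as UUID_UNPARSE(UUID4()) to produce a string UUID. The Python below shows the equivalent behavior for comparison.

    import uuid

    raw = uuid.uuid4().bytes          # like UUID4(): 16-byte binary value
    text = str(uuid.UUID(bytes=raw))  # like UUID_UNPARSE(binary): 36 characters
    print(text, len(text))            # e.g. 1b4e28ba-2fa1-... 36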

JDBC Connectivity
You can use the Data Integration Service to read from relational database sources and write to relational
database targets through JDBC. JDBC drivers are installed with the Informatica services and the Informatica
clients. You can also download a JDBC 3.0-compliant driver from third-party vendor websites.
You can use the JDBC driver to import database objects, such as views and tables, preview data for a
transformation, and run mappings.

For more information, see the Informatica 9.6.0 Developer Tool Guide.

Keyboard Accessibility
In the Developer tool, you can use keyboard shortcuts to work with objects and ports in the editor. You can
also use keyboard shortcuts to navigate the Transformation palette and the workbench.

For more information, see the Informatica 9.6.0 Developer Tool Guide.

Model Repository Service Refresh


In the Developer tool, you can refresh the Model Repository Service to see new and updated objects in the
Model repository.

For more information, see the Informatica 9.6.0 Developer Tool Guide.

Passphrases
In the Developer tool, you can enter a passphrase instead of a password for the following connection types:

• Adabas
• DB2 for i5/OS
• DB2 for z/OS
• IMS
• Sequential
• VSAM
A valid passphrase for accessing databases and data sets on z/OS can be up to 128 characters in length. A
valid passphrase for accessing i5/OS can be up to 31 characters in length. Passphrases can contain the
following characters:

• Uppercase and lowercase letters
• The numbers 0 to 9
• Spaces
• The following special characters:
’ - ; # \ , . / ! % & * ( ) _ + { } : @ | < > ?
Note: The first character is an apostrophe.
For more information, see the Informatica 9.6.0 Developer Tool Guide.
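As a sketch only, the validator below encodes the length limits and character classes listed above; the regular expression is an assumption assembled from that list, not an official rule.

    import re

    ALLOWED = re.compile(r"^[A-Za-z0-9 ’\-;#\\,./!%&*()_+{}:@|<>?]+$")

    def is_valid_passphrase(passphrase: str, platform: str) -> bool:
        # z/OS allows up to 128 characters; i5/OS allows up to 31.
        max_length = 128 if platform == "zos" else 31
        return len(passphrase) <= max_length and bool(ALLOWED.match(passphrase))

    print(is_valid_passphrase("’my z/OS pass-phrase #1", "zos"))  # True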

Informatica Development Platform


This section describes new features and enhancements to Informatica Development Platform.

Design API
Version 9.6.0 includes the following enhancements for the Design API:

• You can use the Design API to fetch an XML source or XML target from the PowerCenter repository.
• You can use the Design API to connect to a hierarchical VSAM data source or target through PowerExchange.
• You can use the Design API to perform repository functions in a domain that uses Kerberos
authentication. You can enable Kerberos authentication through the [Link] file or when you
create a Repository object.
For more information, see the Informatica Development Platform 9.6.0 Developer Guide.

Informatica Connector Toolkit


You can use the Informatica Connector Toolkit to build an adapter to provide connectivity between a data
source and the Informatica platform. The Informatica Connector Toolkit consists of libraries, plug-ins, and
sample codes to develop an adapter in an Eclipse environment.

For more information, see the Informatica Development Platform 9.6.0 Informatica Connector Toolkit
Developer Guide.

Informatica Domain
This section describes new features and enhancements to the Informatica domain.

Analyst Service
Version 9.6.0 includes the following enhancements to the Analyst Service:

• You can select a Data Integration Service configured to run Human tasks. If the Data Integration Service
associated with the Analyst Service is not configured to run Human tasks, choose a different Data
Integration Service.
• You can select a Search Service to enable searches in the Analyst tool.
• You can set the location of the export file directory to export a business glossary.
For more information, see the Informatica 9.6.0 Application Service Guide.

Content Management Service


You can set the location of the SendRight report file on the Content Management Service. Generate a
SendRight report when you run an address validation mapping in certified mode on New Zealand address
records. The report verifies that the address records meet the certification standards of New Zealand Post.

For more information, see the Informatica 9.6.0 Application Service Guide.

The Content Management Service manages the compilation of rule specifications into mapplets. When you
compile a rule specification in the Analyst tool, the Analyst Service selects a Content Management Service to
generate the mapplet. The Analyst tool uses the Model Repository Service configuration to select the Content
Management Service.

For more information, see the Informatica 9.6.0 Application Service Guide.

High Availability
Version 9.6.0 includes the following enhancements to high availability for services:

• When the Model Repository Service becomes unavailable, the Service Manager can restart the service on
the same node or a backup node. You can configure the Model Repository Service to run on one or more
backup nodes.
• When the Data Integration Service becomes unavailable, the Service Manager can restart the service on
the same node or a backup node. You can configure the Data Integration Service to run on one or more
backup nodes.
• When the Data Integration Service fails over or restarts unexpectedly, you can enable automatic recovery
of aborted workflows.
• You can enable the PowerCenter Integration Service to store high availability persistence information in
database tables. The PowerCenter Integration Service stores the information in the associated repository
database.
For more information, see the Informatica 9.6.0 Administrator Guide.

Log Management
You can aggregate logs at the domain level or service level based on scenarios with the Administrator tool.
You can also compress the log files that you aggregate to save disk space.

For more information, see the Informatica 9.6.0 Administrator Guide.

Passphrases
You can enter a passphrase instead of a password at the following locations:

• In the -ConnectionPassword option of the infacmd isp CreateConnection and UpdateConnection commands for ADABAS, DB2I, DB2Z, IMS, SEQ, or VSAM connections.
• In the -pwxPassword option of the infacmd pwx createdatamaps command for IMS, SEQ, and VSAM data sources.
• In the Administrator tool, for DB2 for i5/OS and DB2 for z/OS connections.
A valid passphrase for accessing databases and data sets on z/OS can be up to 128 characters in length. A
valid passphrase for accessing i5/OS can be up to 31 characters in length. Passphrases can contain the
following characters:

• Uppercase and lowercase letters
• The numbers 0 to 9
• Spaces
• The following special characters:
’ - ; # \ , . / ! % & * ( ) _ + { } : @ | < > ?
Note: The first character is an apostrophe.
For more information, see the Informatica 9.6.0 Administrator Guide and Informatica 9.6.0 Command
Reference.

Search Service
Create a Search Service to enable search in the Analyst tool and Business Glossary Desktop.

For more information, see the Informatica 9.6.0 Application Service Guide.

Workflow Graph
You can view the graphical representation of a workflow that you run in the Administrator tool. You can view
the details of the tasks within the workflow and the failure points.

For more information, see the Informatica 9.6.0 Administrator Guide.

Informatica Domain Security


This section describes security enhancements to the Informatica domain.

Authentication
You can run Informatica with Kerberos authentication and Microsoft Active Directory (AD) directory service.
Kerberos authentication provides single sign-on capability to Informatica domain client applications. The
Informatica domain supports Active Directory 2008 R2.

Two-Factor Authentication (TFA)


Informatica clients can run on a Windows network that uses two-factor authentication.

Encryption Key
You can specify a keyword to generate a unique encryption key for encrypting sensitive data such as
passwords that are stored in the domain.

Workflow Security
You can configure the PowerCenter Integration Service to run PowerCenter workflows securely. The Enable
Data Encryption option enables secure communication between the PowerCenter Integration Service and the
Data Transformation Manager (DTM) process and between DTM processes.

Administrator Group
The Informatica domain includes an Administrator group with default administrator privileges. You can add
users to or remove users from the Administrator group. You cannot delete the Administrator group.

Administrator Account Lockout


When you configure account lockout in the Administrator tool, you can enforce account lockout for
administrator user accounts. The Admin Account Lockout option enables lockout for administrator user
accounts. When you enable the Account Lockout option, you can also enable the Admin Account Lockout
option.

Connection to Secure Relational Databases


You can use the Informatica relational database drivers to connect to a secure Oracle, Microsoft SQL Server,
or IBM DB2 database. You can create repositories, sources, and targets on databases secured with SSL
certificates.

Audit Reports
In the Administrator tool, you can generate audit reports to get information on users and groups in the
Informatica domain. For example, you can get information about a user account, such as the privileges and
permissions assigned to the user and the groups associated with the user.

Analyst Service Privileges
The following table describes new privileges for the Analyst Service:

Manage Glossaries

User is able to manage business glossaries.

Workspace Access

User is able to access the following workspaces in the Analyst tool:
- Design workspace.
- Discovery workspace.
- Glossary workspace.
- Scorecards workspace.

Design Workspace

User is able to access the Design workspace.

Discovery Workspace

User is able to access the Discovery workspace.

Glossary Workspace

User is able to access the Glossary workspace.

Scorecards Workspace

User is able to access the Scorecards workspace.

Model Repository Service Privileges


The following table describes new privileges for the Model Repository Service:

Access Analyst

User is able to access the Model repository from the Analyst tool.

Access Developer

User is able to access the Model repository from the Developer tool.

For more information, see the Informatica 9.6.0 Security Guide.

Command Line Programs
This section describes new and changed commands and options for the Informatica command line
programs.

infacmd as Commands
The following table describes an updated infacmd as command:

CreateService

Contains the following new options:
- -HumanTaskDataIntegrationService(-htds). Optional. Name of the Data Integration Service that runs Human tasks.
- -BusinessGlossaryExportFileDirectory(-bgefd). Optional. Location of the directory to export business glossary files.

Contains the following obsolete option:
- -StagingDatabase(-sd). Required. Database connection name for a staging database.

UpdateServiceOptions

Updates Analyst Service options. In version 9.6.0, you can run the command to specify a Data Integration Service to run Human tasks.

For example, the following command configures the Analyst Service to specify DS_ID_100 as the Data Integration Service name:

infacmd as UpdateServiceOptions
 -dn InfaDomain -sn AS_ID_100
 -un Username -pd Password
 [Link]=DS_ID_100

The following table describes obsolete infacmd as commands:

CreateAuditTables

Creates audit tables that contain audit trail log events for bad record tables and duplicate tables in a staging database. Update any script that uses infacmd as CreateAuditTables.

DeleteAuditTables

Deletes audit tables that contain audit trail log events for bad record tables and duplicate tables in a staging database. Update any script that uses infacmd as DeleteAuditTables.

infacmd dis Commands


The following table describes updated infacmd dis commands:

CreateService

Contains the following new option:
- -BackupNodes(-bn). Optional. Name of the backup nodes.

UpdateService

Contains the following new option:
- -BackupNodes(-bn). Optional. Name of the backup nodes.

infacmd idd Commands
The infacmd idd commands are obsolete. Update any script that refers to an infacmd idd command.

The following table describes the obsolete infacmd idd commands:

CreateService

Creates a Data Director Service.

ListServiceOptions

Lists the Data Director Service options.

ListServiceProcessOptions

Lists the Data Director Service process options.

RemoveService

Removes the Data Director Service.

UpdateServiceOptions

Updates the Data Director Service options.

UpdateServiceProcessOptions

Updates the Data Director Service process options.

infacmd isp Commands


The following table describes updated infacmd isp commands:

AssignISToMMService

Contains the following new option:
- -RepositoryUserSecurityDomain(-rsdn). Optional. Name of the security domain to which the PowerCenter repository user belongs.

CreateConnection

Contains the following updated option:
- -ConnectionPassword. You can enter a passphrase for ADABAS, DB2I, DB2Z, IMS, SEQ, or VSAM connections. A passphrase can be up to 128 characters in length for z/OS connections and up to 31 characters in length for DB2 for i5/OS connections. A passphrase can contain letters, numbers, spaces, and some special characters.

CreateIntegrationService

Contains the following service option (-so):
- StoreHAPersistenceInDB. Optional. Stores process state information in high availability persistence tables in the associated PowerCenter repository database. Default is no.

EnableService

Can enable the Search Service.

GetLog

Contains the argument SEARCH for the ServiceType option. Use the argument to get the log events for the Search Service.

ListServices

Contains the argument SEARCH for the ServiceType option. Use the argument to get a list of all Search Services running in the domain.

UpdateConnection

Contains the following updated option:
- -ConnectionPassword. You can enter a passphrase for ADABAS, DB2I, DB2Z, IMS, SEQ, or VSAM connections. A passphrase can be up to 128 characters in length for z/OS connections and up to 31 characters in length for DB2 for i5/OS connections. A passphrase can contain letters, numbers, spaces, and some special characters.

UpdateDomainOptions

Contains the following domain option (-do):
- ServiceResilTimeout. Amount of time in seconds that a service tries to establish or reestablish a connection to another service.

UpdateGatewayInfo

Contains the following new option:
- -Force(-f). Optional. Updates or creates the [Link] file even when the connection to the domain fails. The -Force option sets the Kerberos and TLS enabled options as false in the [Link] file if the connection to the domain fails. If you do not specify the -Force option, the command does not update the [Link] file if the connection to the domain fails. Previously, the command could not check for any error message when updating the gateway node with the connectivity information that you specified.

UpdateIntegrationService

Contains the following service option (-so):
- StoreHAPersistenceInDB. Optional. Stores process state information in high availability persistence tables in the associated PowerCenter repository database. Default is no.

infacmd mrs Commands


The following table describes updated infacmd mrs commands:

CreateService

Contains the following new option:
- -BackupNodes(-bn). Optional. Name of the backup nodes.

UpdateService

Contains the following new options:
- -PrimaryNode(-nn). Optional. Name of the primary node.
- -BackupNodes(-bn). Optional. Name of the backup nodes.

infacmd ps Commands
The following table describes new infacmd ps commands:

migrateProfileResults

Migrates column profile results and data domain discovery results from versions 9.1.0, 9.5.0, or 9.5.1.

synchronizeProfile

Migrates documented keys, user-defined keys, committed keys, primary keys, and foreign keys for all the profiles in a specific project from versions 9.1.0, 9.5.0, or 9.5.1.

infacmd pwx Commands


The following table describes a new infacmd pwx command:

createdatamaps

Creates PowerExchange data maps for IMS, SEQ, or VSAM data sources for bulk data movement.

infacmd search Commands
The following table describes the new infacmd search commands:

createService

Creates a Search Service.

listServiceOptions

Lists the properties for a Search Service.

listServiceProcessOptions

Lists the properties for a Search Service process.

updateServiceOptions

Configures properties for a Search Service.

updateServiceProcessOptions

Configures properties for a Search Service process.

For more information, see the Informatica 9.6.0 Command Reference.

PowerCenter
This section describes new features and enhancements to PowerCenter.

Pushdown Optimization for SAP HANA


The PowerCenter Integration Service can push transformation logic to SAP HANA sources and targets when
the connection type is ODBC.

For more information, see the Informatica PowerCenter 9.6.0 Advanced Workflow Guide.

High Availability Persistence in a Database


You can enable the PowerCenter Integration Service to store high availability persistence information in
database tables. The PowerCenter Integration Service stores the information in the associated repository
database.

For more information, see the Informatica 9.6.0 Administrator Guide.

Transformations
You can use a parameter file to provide cache size values in the following transformations:

• Aggregator
• Joiner
• Rank
• Sorter

For more information, see the Informatica PowerCenter 9.6.0 Transformation Guide.

PowerCenter Big Data Edition


This section describes new features and enhancements to PowerCenter Big Data Edition.

Automatic Workflow Recovery


You can configure automatic recovery of aborted workflow instances due to an unexpected shutdown of the
Data Integration Service process. When you configure automatic recovery, the Data Integration Service
process recovers aborted workflow instances due to a service process shutdown when the service process
restarts.

For more information, see the Informatica 9.6.0 Developer Workflow Guide.

Mappings in the Hive Environment


• You can run mappings with Cloudera 4.2, Hortonworks 1.3.2, MapR 2.1.3, and MapR 3.0.1 distributions.
• When you choose Hive as the validation environment for the mapping, you can now choose a Hive version.
• You can append to a Hive target table with Hive version 0.9 and later.
• In a Java transformation, you can configure an input port as a partition key, a sort key, and assign a sort
direction to get sorted output data.
• To modify the Hadoop distribution directory on the Hadoop data nodes and the Data Integration Service node, use the Hadoop resource descriptor configuration file [Link].

For more information, see the Informatica PowerCenter Big Data Edition 9.6.0 User Guide.

Partitioned Mappings in the Native Environment


If you have the Partitioning option, you can enable the Data Integration Service process to maximize
parallelism when it runs mappings in the native environment. The Data Integration Service process must run
on a node that has multiple CPUs. When you maximize parallelism, the Data Integration Service dynamically
divides the underlying data into partitions and processes all of the partitions concurrently. When the Data
Integration Service adds partitions, it increases the number of processing threads, which can increase
mapping performance.

For more information, see the Informatica 9.6.0 Mapping Guide.

PowerCenter Advanced Edition


This section describes new features and enhancements to PowerCenter Advanced Edition.

Business Glossary
Business Glossary comprises online glossaries of business terms and policies that define important
concepts within an organization. Data stewards create and publish terms that include information such as
descriptions, relationships to other terms, and associated categories. Glossaries are stored in a central
location for easy lookup by end-users.

Business Glossary is made up of glossaries, business terms, policies, and categories. A glossary is the
high-level container that stores other glossary content. A business term defines relevant concepts within the
organization, and a policy defines the business purpose that governs practices related to the term. Business
terms and policies can be associated with categories, which are descriptive classifications. You can access
Business Glossary through Informatica Analyst (the Analyst tool).

For more information, see the Informatica 9.6.0 Business Glossary Guide.

Metadata Manager
This section describes new features and enhancements to Metadata Manager.

Security Enhancements
Metadata Manager contains the following security enhancements:

Connection to secure relational databases

Metadata Manager can communicate with secure IBM DB2, Microsoft SQL Server, and Oracle databases.
Metadata Manager can communicate with these databases when they are used for the Metadata
Manager repository, for the PowerCenter repository, or as metadata sources.

For more information, see the Informatica PowerCenter 9.6.0 Metadata Manager Administrator Guide.

Kerberos authentication

Metadata Manager can run on a domain that is configured with Kerberos authentication.

For information about configuring the domain to use Kerberos authentication, see the Informatica 9.6.0
Security Guide. For information about running Metadata Manager and mmcmd when the domain uses
Kerberos authentication, see the Informatica PowerCenter 9.6.0 Metadata Manager Administrator Guide.

Two-factor authentication

Metadata Manager can run on a Windows network that uses two-factor authentication.

For more information, see the Informatica 9.6.0 Security Guide.

Business Glossary Resources


You can create Business Glossary resources that are based on Informatica Analyst business glossaries.
Create a Business Glossary resource to extract metadata from an Informatica Analyst business glossary.

For information about creating resources, see the Informatica PowerCenter 9.6.0 Metadata Manager
Administrator Guide. For information about viewing resources, see the Informatica PowerCenter 9.6.0
Metadata Manager User Guide.

Resource Versions
You can create resources of the following versions:

• Microstrategy 9.3.1 and 9.4.1. Previously, you could create Microstrategy resources up to version 9.2.1.
• Netezza 7.0. Previously, you could create Netezza resources up to version 6.0.
For information about creating resources, see the Informatica PowerCenter 9.6.0 Metadata Manager
Administrator Guide.

Browser Support
You can run the Metadata Manager application in the Google Chrome web browser.

PowerExchange Adapters for PowerCenter


This section describes new features and enhancements to PowerExchange adapters for PowerCenter.

PowerExchange for Greenplum

You can configure a session to override the schema that is specified in the Greenplum connection
object.

For more information, see the Informatica PowerExchange for Greenplum 9.6.0 User Guide for
PowerCenter.

PowerExchange for Hadoop

PowerExchange for Hadoop supports the following updated versions of Hadoop distributions to access
Hadoop sources and targets:

• Cloudera CDH 4.2
• Hortonworks 1.3.2
• MapR 2.1.3 and 3.0.1
• Pivotal HD 1.1
• IBM BigInsights-2.1

For more information, see the Informatica PowerExchange for Hadoop 9.6.0 User Guide for PowerCenter.

PowerExchange for Microsoft Dynamics CRM

• You can use Microsoft Dynamics CRM Online version 2013 for online deployment.
• You can configure the number of rows that you want to retrieve from Microsoft Dynamics CRM.
• You can join two related entities that have one-to-many or many-to-one relationships.
• PowerExchange for Microsoft Dynamics CRM uses HTTP compression to extract data if HTTP
compression is enabled in Internet Information Services (IIS) on the server where Microsoft Dynamics CRM
is installed.
• You can configure the PowerCenter Integration Service to write records in bulk mode.
• You can change the location of the [Link] file and the [Link] files at run time.

For more information, see the Informatica PowerExchange for Microsoft Dynamics CRM 9.6.0 User Guide
for PowerCenter.

PowerExchange for SAP NetWeaver

• PowerExchange for SAP NetWeaver uses SAP NetWeaver RFC SDK 7.20 libraries.
• You can enable partitioning for SAP BW sessions that load data to 7.x DataSources. When you enable
partitioning, the PowerCenter Integration Service performs the extract, transform, and load for each
partition in parallel.
• You can run ABAP stream mode sessions with the Remote Function Call communication protocol.
• You can install secure transports to enforce security authorizations when you use ABAP to read data
from SAP.
• When you extract business content data from SAP Business Suite applications, you can use data
sources that belong to a custom namespace.
• When you use timestamp-based delta pointers to extract business content data, you can extract only the
changed data instead of transferring the entire data set.

For more information, see the Informatica PowerExchange for SAP User Guide for PowerCenter.

PowerExchange for SAS

You can read data directly from a SAS data file.

For more information, see the Informatica PowerExchange for SAS 9.6.0 User Guide for PowerCenter.

PowerExchange for Siebel

When you import Siebel business components, you can specify the name of the Siebel repository if
multiple Siebel repositories are available. You can create and configure the [Link] file
to add the Repository Name field to the Import from Siebel wizard in PowerExchange for Siebel.

For more information, see the Informatica PowerExchange for Siebel 9.6.0 User Guide for PowerCenter.

PowerExchange for Teradata Parallel Transporter API

• You can configure a session so that Teradata PT API uses one of the spool modes to extract data
from Teradata.
• You can configure a session to use a character in place of an unsupported Teradata Unicode
character while loading data to targets.

For more information, see the Informatica PowerExchange for Teradata Parallel Transporter API 9.6.0
User Guide for PowerCenter.

PowerExchange for Web Services

• The PowerCenter Integration Service can process SOAP 1.2 messages with RPC/encoded and
document/literal encoding styles. Each web service can have an operation that uses a SOAP 1.2
binding. You can create a Web Service Consumer transformation with a SOAP 1.2 binding.
• You can use PowerExchange for Web Services with SharePoint 2010 and 2013 as a web service
provider.

For more information, see the Informatica PowerExchange for Web Services 9.6.0 User Guide for
PowerCenter.

PowerExchange Adapters for Informatica


This section describes new features and enhancements to PowerExchange adapters for Informatica.

PowerExchange for HBase

PowerExchange for HBase provides connectivity to an HBase data store. Use PowerExchange for HBase
to read data from HBase column families or write data to column families in an HBase table. You can
read data from or write data to a column family or a single binary column.

You can add an HBase data object operation as a source or as a target in a mapping and run the
mapping in the native or Hive environment.

For more information, see the PowerExchange for HBase 9.6.0 User Guide.

PowerExchange for DataSift

You can configure the HTTP proxy server authentication settings at design time.

For more information, see the Informatica PowerExchange for DataSift 9.6.0 User Guide.

PowerExchange for Facebook

• You can extract information about a group, the news feed of a group, the list of members in a group,
basic information about a page, and the news feed of a page from Facebook.
• You can configure the HTTP proxy server authentication settings at design time.

For more information, see the Informatica PowerExchange for Facebook 9.6.0 User Guide.

PowerExchange for HDFS

• PowerExchange for HDFS supports the following Hadoop distributions to access HDFS sources and
targets:
- CDH Version 4 Update 2
- Hortonworks 1.3.2
- MapR 2.1.3
- MapR 3.0.1
• You can write text files and binary file formats, such as sequence files, to HDFS with a complex file
data object.
• You can write compressed complex files, specify compression formats, and decompress files.
• The Data Integration Service creates partitions to read data from sequence files and custom input
format files that can be split.

For more information, see the Informatica PowerExchange for HDFS 9.6.0 User Guide.

PowerExchange for Hive

• PowerExchange for Hive supports the following Hive distributions to access Hive sources and
targets:
- Cloudera CDH Version 4 Update 2
- Hortonworks 1.3.2
- MapR 2.1.3
- MapR 3.0.1
• You can write to Hive partitioned tables when you run mappings in a Hive environment.

PowerExchange for LinkedIn

• You can specify the full name of a person when you look up company information in LinkedIn.
• You can configure the HTTP proxy server authentication settings at design time.

For more information, see the Informatica PowerExchange for LinkedIn 9.6.0 User Guide.

PowerExchange for Salesforce

• You can select specific records from Salesforce by using the filter from the query property of the
Salesforce data object read operation.
• You can use a Salesforce data object read operation to look up data in a Salesforce object.
• You can configure the HTTP proxy server authentication settings at design time.

For more information, see the Informatica PowerExchange for Salesforce 9.6.0 User Guide.

PowerExchange for SAP NetWeaver

• PowerExchange for SAP NetWeaver uses SAP NetWeaver RFC SDK 7.20 libraries.
• You can install secure transports to enforce security authorizations when you use ABAP to read data
from SAP.

For more information, see the Informatica PowerExchange for SAP 9.6.0 User Guide.

PowerExchange for Twitter

• You can specify a list of user IDs or screen names in a .txt or .csv format to extract the profiles of
users. You can specify a valid user ID or a screen name to extract the profile of a user.
• You can configure the HTTP proxy server authentication settings at design time.
For more information, see the Informatica PowerExchange for Twitter 9.6.0 User Guide.

PowerExchange for Web Content-Kapow Katalyst

You can configure the HTTP proxy server authentication settings at design time.

For more information, see the Informatica PowerExchange for Web Content-Kapow Katalyst 9.6.0 User Guide.

Informatica Documentation
This section describes new guides included with the Informatica documentation. Some new guides are
organized based on shared functionality among multiple products and replace previous guides.

The Informatica documentation contains the following new guides:


Informatica Analyst Tool Guide

Contains general information about Informatica Analyst (the Analyst tool). Previously, the Analyst tool
was documented in the Informatica Data Integration Analyst User Guide.

Informatica Application Service Guide

Contains information about application services. Previously, the application services were documented
in the Informatica Administrator Guide.

Informatica Connector Toolkit Developer Guide

Contains information about the Informatica Connector Toolkit and how to develop an adapter for the
Informatica platform. You can find information on components that you define to develop an adapter,
such as connection attributes, type system, metadata objects, and run-time behavior.

Informatica Connector Toolkit Getting Started Guide

Contains a tutorial on how to use the Informatica Connector Toolkit to develop a sample MySQL adapter
for the Informatica platform. You can find information on how to install Informatica Connector Toolkit
and on how to create and publish a sample MySQL adapter with the Informatica Connector Toolkit.

Informatica Data Explorer Data Discovery Guide

Contains information about discovering the metadata of source systems, including content and
structure. You can find information on column profiles, data domain discovery, primary key and foreign
key discovery, functional dependency discovery, Join analysis, and enterprise discovery. Previously, data
discovery was documented in the Informatica Data Explorer User Guide.

Informatica Business Glossary Guide

Contains information about Business Glossary. You can find information about how to manage and look
up glossary content in the Analyst tool. Glossary content includes terms, policies, and categories.
Previously, information about Metadata Manager Business Glossary was documented in the Informatica
PowerCenter Metadata Manager Business Glossary Guide.

Informatica Data Quality Exception Management Guide

Contains information about exception management for Data Quality. You can find information about
managing exception record tasks in the Analyst tool. Previously, exception management was
documented in the Informatica Data Director for Data Quality Guide, Data Quality User Guide, and Data
Services User Guide.

Informatica Database View Reference

Contains information about Model Repository views, Profile Warehouse views, and Business Glossary
views. Previously, this book was called the Informatica Data Services Model Repository Views, and the
profile views were documented in an H2L article. The Business Glossary views are new content in this
book.

Informatica Developer Tool Guide

Contains information about Informatica Developer. You can find information on common functionality in
the Developer tool. Previously, the Developer tool was documented in the Informatica Developer User
Guide.

Informatica Mapping Guide

Contains information about configuring Model repository mappings. Previously, the mapping
configuration was documented in the Informatica Developer User Guide.

Informatica Mapping Specifications Getting Started Guide

Contains getting started information for mapping specifications.

Informatica Mapping Specifications Guide

Contains information about mapping specifications. Previously, the mapping specifications were
documented in the Informatica Data Integration Analyst User Guide.

Informatica Profile Guide

Contains information about profiles. The guide contains basic information about running column
profiles, creating rules, and creating scorecards. Previously, profiling was documented in the Data Quality
User Guide and Informatica Data Explorer User Guide.

Informatica Reference Data Guide

Contains information about reference data objects. A reference data object contains a set of data values
that you can use to perform search operations in source data. You can create reference data objects in
the Developer tool and Analyst tool, and you can import reference data objects to the Model repository.
Previously, reference data objects were documented in the Informatica Data Quality User Guide.

Informatica Rule Builder Guide

Contains information about the Rule Builder feature in the Analyst tool. Use Rule Builder to describe
business rule requirements as a series of logical statements. You compile the logical statements into a
rule specification. The Analyst tool saves a copy of the rule specification as a mapplet in the Model
repository.

Informatica Security Guide

Contains information about security for the Informatica domain. Previously, Informatica security was
documented in the Informatica Administrator Guide.

Informatica SQL Data Service Guide

This manual contains information about creating SQL data services, populating virtual data, and
connecting to an SQL data service with third-party tools. Previously, this book was called the Informatica
Data Services User Guide.

Chapter 27

Changes to Informatica Data Explorer (9.6.0)
This chapter includes the following topics:

• Enterprise Discovery
• Profile Results Verification
• Rules
• Scorecards

Enterprise Discovery
Effective in version 9.6.0, enterprise discovery includes the following changes:

• You can refresh the Model Repository Service to view the enterprise discovery results for data sources
from external connections.
Previously, after you ran an enterprise discovery profile, you had to reconnect to the Model Repository
Service.
• The Profile Model option in the profile wizard that you open by selecting File > New > Profile is renamed
to Enterprise Discovery Profile.
• The graphical view of the enterprise discovery results displays the overlap of data domains in entities for
the data domains that you choose to include in the graphical view.

Profile Results Verification


Effective in version 9.6.0, you can verify the data domain discovery results on multiple columns in the
Developer tool. When you verify the profile results, the Developer tool runs the profile on all rows of the data
source.

Previously, you verified the data domain discovery results for a single column.

Rules
Effective in version 9.6.0, you can select multiple input columns when you apply a rule to a profile in
Informatica Analyst.

Previously, you selected one input column when you applied a rule.

Scorecards
Effective in version 9.6.0, scorecards include the following changes:

• When you select the valid values for a metric, you can view the percentage of selected valid values and
count of total valid values.
Previously, you could view the count of total valid values in the column.
• When you view the source data for a metric, by default, the Drilldown section displays the rows of source
data that are not valid.
Previously, the default was to display the rows that are valid.
• In the scorecard results, you can select a score and click the trend chart arrow to view the trend chart.
Previously, you right-clicked the score and selected the Show Trend Chart option.

Chapter 28

Changes to Informatica Data Quality (9.6.0)
This chapter includes the following topics:

• Address Validator Transformation
• Exception Record Management
• Informatica Data Director for Data Quality
• Java Transformation
• Mapping Parameters
• Match Transformation
• Native Connectivity to Microsoft SQL Server
• Port-to-Port Data Conversion
• Profile Results Verification
• Reference Tables
• Rules
• Scorecards

Address Validator Transformation


Effective in version 9.6.0, the Address Validator transformation uses version 5.4.1 of the Address Doctor
software engine.

Previously, the transformation used version 5.3.1 of the Address Doctor software engine.

Exception Record Management


Effective in version 9.6.0, the Analyst tool reads exception records from the database tables that a Human
task identifies.

Previously, the Analyst tool read exception records from a staging database that the Analyst Service
identified.

To continue to analyze the records in the staging database after you upgrade, perform the following steps:

1. Create a mapping that reads the staging database tables.
Use an Exception transformation to identify the exception records.
2. Configure a workflow with a Mapping task and a Human task.
Configure the Mapping task to run the exception mapping. Configure the Human task to read the output
of the Mapping task.
3. Run the workflow.
4. Log in to the Analyst tool to review and update the exception records.

Informatica Data Director for Data Quality


Effective in version 9.6.0, the Informatica Data Director for Data Quality web application is obsolete. To
review and update Human task data in version 9.6.0, log in to the Analyst tool.

Previously, users logged in to Informatica Data Director for Data Quality to review and update the records that
a Human task specified.

Java Transformation
Effective in version 9.6.0, the Stateless advanced property for the Java transformation is valid in both the
native and Hive environments. In the native environment, Java transformations must have the Stateless
property enabled so that the Data Integration Service can use multiple partitions to process the mapping.

Previously, the Stateless property was valid only in the Hive environment. The Data Integration Service
ignored the Stateless property when a mapping ran in the native environment.

Mapping Parameters
Effective in version 9.6.0, the user-defined parameter that represents a long value is named Bigint. Previously,
this user-defined parameter was named Long.

Effective in version 9.6.0, parameter names that are defined in reusable transformations and in relational,
PowerExchange, and flat file data objects, and that begin with the dollar sign ($), are renamed to a unique
name in the Model repository. However, the parameter name is not changed in the parameter file. Previously,
you could use the dollar sign ($) as the first character in mapping parameter names.

Match Transformation
Effective in version 9.6.0, a Match transformation that performs identity match analysis treats null data
values and empty data fields as distinct from each other. As a result, identity match analysis and field
match analysis treat null data values and empty data fields in the same manner in version 9.6.0.

Previously, a Match transformation treated null data values and empty data fields as identical data elements
in identity match analysis.

Native Connectivity to Microsoft SQL Server


Effective in version 9.6.0, you must install the Microsoft SQL Server 2012 Native Client to configure native
connectivity to Microsoft SQL Server databases from Windows machines.

Previously, you did not have to install an SQL client because Informatica used the Microsoft OLE DB provider
for native connectivity.

If you upgrade from an earlier version, you must install the Microsoft SQL Server 2012 Native Client for the
existing mappings to work.

Port-to-Port Data Conversion


Effective in version 9.6.0, the Data Integration Service uses the conversion functions in the transformation
language to perform port-to-port conversions between transformations. The Data Integration Service
performs port-to-port conversions when you pass data between ports with different datatypes. If the data
that you pass is not valid for the conversion datatype, a transformation row error occurs.

Previously, the Data Integration Service did not use the transformation functions for port-to-port conversions.
The Data Integration Service used a separate algorithm. If the data that you passed contained data that was
not valid for the conversion datatype, the Data Integration Service dropped the value and used a substitute
value.

Upgraded mappings that use port-to-port data conversion might produce different output data. For example,
a mapping in a previous version produced the following output:
"0.377777","0.527777","0.000000","0.250000","0.000000","0.377777","0.250000"
After you upgrade, the same mapping might produce the following output:
"0.377777","0.527777","0","0.25","0","0.377777","0.25"

Profile Results Verification


Effective in version 9.6.0, you can verify the data domain discovery results on multiple columns in the
Developer tool. When you verify the profile results, the Developer tool runs the profile on all rows of the data
source.

Previously, you verified the data domain discovery results for a single column.

Reference Tables
The following changes apply to reference tables in version 9.6.0:

• Effective in version 9.6.0, you can use wildcards when you search a reference table for data values in the
Developer tool. When you search a reference table for data values, the search is not case-sensitive in the
Developer tool.
Previously, you performed wildcard searches and searches that are not case-sensitive in the Analyst tool.
• Effective in version 9.6.0, the Data Integration Service stores a single instance of a reference table in
memory when multiple mappings in a process read the reference table.
Previously, the Data Integration Service stored an instance of the reference table in memory for each
mapping.

Rules
Effective in version 9.6.0, you can select multiple input columns when you apply a rule to a profile in
Informatica Analyst.

Previously, you selected one input column when you applied a rule.

Scorecards
Effective in version 9.6.0, scorecards include the following changes:

• When you select the valid values for a metric, you can view the percentage of selected valid values and
count of total valid values.
Previously, you could view the count of total valid values in the column.
• When you view the source data for a metric, by default, the Drilldown section displays the rows of source
data that are not valid.
Previously, the default was to display the rows that are valid.
• In the scorecard results, you can select a score and click the trend chart arrow to view the trend chart.
Previously, you right-clicked the score and selected the Show Trend Chart option.

Chapter 29

Changes to Informatica Data Services (9.6.0)
This chapter includes the following topics:

• Java Transformation
• Native Connectivity to Microsoft SQL Server
• Port-to-Port Data Conversion
• Profile Results Verification
• Rules
• Scorecards

Java Transformation
Effective in version 9.6.0, the Stateless advanced property for the Java transformation is valid in both the
native and Hive environments. In the native environment, Java transformations must have the Stateless
property enabled so that the Data Integration Service can use multiple partitions to process the mapping.

Previously, the Stateless property was valid only in the Hive environment. The Data Integration Service
ignored the Stateless property when a mapping ran in the native environment.

Native Connectivity to Microsoft SQL Server


Effective in version 9.6.0, you must install the Microsoft SQL Server 2012 Native Client to configure native
connectivity to Microsoft SQL Server databases from Windows machines.

Previously, you did not have to install an SQL client because Informatica used the Microsoft OLE DB provider
for native connectivity.

If you upgrade from an earlier version, you must install the Microsoft SQL Server 2012 Native Client for the
existing mappings to work.

Port-to-Port Data Conversion
Effective in version 9.6.0, the Data Integration Service uses the conversion functions in the transformation
language to perform port-to-port conversions between transformations. The Data Integration Service
performs port-to-port conversions when you pass data between ports with different datatypes. If the data
that you pass is not valid for the conversion datatype, a transformation row error occurs.

Previously, the Data Integration Service did not use the transformation functions for port-to-port conversions.
The Data Integration Service used a separate algorithm. If the data that you passed contained data that was
not valid for the conversion datatype, the Data Integration Service dropped the value and used a substitute
value.

Upgraded mappings that use port-to-port data conversion might produce different output data. For example,
a mapping in a previous version produced the following output:
"0.377777","0.527777","0.000000","0.250000","0.000000","0.377777","0.250000"
After you upgrade, the same mapping might produce the following output:
"0.377777","0.527777","0","0.25","0","0.377777","0.25"

Profile Results Verification


Effective in version 9.6.0, you can verify the data domain discovery results on multiple columns in the
Developer tool. When you verify the profile results, the Developer tool runs the profile on all rows of the data
source.

Previously, you verified the data domain discovery results for a single column.

Rules
Effective in version 9.6.0, you can select multiple input columns when you apply a rule to a profile in
Informatica Analyst.

Previously, you selected one input column when you applied a rule.

Scorecards
Effective in version 9.6.0, scorecards include the following changes:

• When you select the valid values for a metric, you can view the percentage of selected valid values and
count of total valid values.
Previously, you could view the count of total valid values in the column.
• When you view the source data for a metric, by default, the Drilldown section displays the rows of source
data that are not valid.
Previously, the default was to display the rows that are valid.

• In the scorecard results, you can select a score and click the trend chart arrow to view the trend chart.
Previously, you right-clicked the score and selected the Show Trend Chart option.

Chapter 30

Changes to Informatica Data Transformation (9.6.0)
This chapter includes the following topics:

• Export Mapping to PowerCenter
• Invalid CMConfig File

Export Mapping to PowerCenter


Effective in version 9.6.0, you can export a mapping with a Data Processor transformation to PowerCenter.

Invalid CMConfig File


Effective in version 9.6.0, a Data Processor transformation cannot run when the [Link] file is an invalid XML file.

Chapter 31

Changes to Informatica Domain (9.6.0)
This chapter includes the following topics:

• Informatica Services
• Analyst Service
• Content Management Service
• Data Integration Service
• Data Director Service
• Test Data Manager Service
• Model Repository Service Privileges
• Domain Security
• Changes to Supported Platforms

Informatica Services
Effective in version 9.6.0, the Informatica Services include the following changes:

• On Windows, when you run the command [Link] startup to start the Informatica services, the
ISP console window runs in the background.
Previously, the window appeared in the foreground when you ran [Link] startup to start the
Informatica services. Also, if you encounter error messages during the Service Manager startup, the
installer saves the error messages to the [Link] and [Link] log files.
• On Windows, you must be a user with administrative privileges to start the Informatica services from the
command line and the Windows Start menu.
Previously, the user did not need administrative privileges to start the Informatica services.

Analyst Service
The following changes apply to the Analyst Service in version 9.6.0:

• Effective in version 9.6.0, the Analyst Service identifies the Data Integration Service that runs Human
tasks.
Previously, the Data Director Service identified the Data Integration Service that runs Human tasks.
• Effective in version 9.6.0, the Staging Database property is obsolete.
Previously, the Analyst Service used the Staging Database property to identify the database that contained
exception record tables.

Content Management Service


Effective in version 9.6.0, you can set the Max Result Count property on the Content Management Service and
on the Address Validator transformation. The property determines the maximum number of address
suggestions that the Address Validator transformation can generate for a single address.

Previously, you set the Max Result Count property on the Address Validator transformation.

Data Integration Service


Effective in version 9.6.0, when you run Data Integration Service jobs in separate operating system
processes, the Data Integration Service maintains a pool of reusable DTM processes. Each job runs in a DTM
process selected from the pool. One DTM process can run multiple DTM instances for related jobs. If you
configure connection pooling, each DTM process maintains its own connection pool library that it can reuse
for related jobs that run in the same DTM process.

Previously when you ran Data Integration Service jobs in separate operating system processes, each job ran
in a separate DTM process. One DTM process ran a single DTM instance. When you ran jobs in separate
operating system processes, the Data Integration Service ignored the connection pooling properties.

Data Director Service


Effective in version 9.6.0, the Data Director Service is obsolete.

Previously, you configured a Data Director Service to identify the Data Integration Service that runs Human
tasks. To identify the Data Integration Service that runs Human tasks in version 9.6.0, configure the Human
Task Properties on the Analyst Service.

The Informatica 9.6.0 upgrade process upgrades a Data Director Service to an Analyst Service. If you upgrade
an Informatica domain that includes a Data Director Service and an Analyst Service, the upgrade process
creates a separate Analyst Service for each service. After you upgrade, you can keep the Analyst Services in
the domain. Optionally, you can merge the services.

Test Data Manager Service
Effective in version 9.6.0, Test Data Management (TDM) is available as a service on the Informatica domain.
Create and configure a Test Data Manager Service (TDM Service) in the Informatica domain from the
Administrator tool. Define roles and privileges to perform Test Data Management tasks as custom roles for
the TDM Service. The web-based user interface of Test Data Management uses database content from the
repository associated with the TDM Service. You must install TDM before you can create the TDM
Service. You also define security preferences for the TDM Service from the Administrator tool.

Previously, TDM was independent of the Informatica domain and not a service on the domain.

Model Repository Service Privileges


Effective in version 9.6.0, the Create Projects privilege for the Model Repository Service is renamed to the
Create, Edit, and Delete Projects privilege. Users must have the Create, Edit, and Delete Projects privilege to
complete the following tasks in the Analyst tool and the Developer tool:

• Create projects.
• Edit projects. Users must also have Write permission on the project.
• Delete projects that the user created. Users must also have Write permission on the project.
Previously, when users had the Create Projects privilege for the Model Repository Service, they could create
projects. When users had Write permission on the project, they could edit and delete the project.

Domain Security
Effective in version 9.6.0, the Enable Transport Layer Security (TLS) for the domain option in the
Administrator tool is renamed Enable Secure Communication. The Enable Secure Communication option
secures the communication between the Service Manager and all services in the Informatica domain. You
can specify a keystore and truststore file for the SSL certificate.

Previously, the Enable Transport Layer Security (TLS) for the domain option in the Administrator tool did not
enable secure communication for the PowerCenter services. The option used the default Informatica SSL
certificate.

Changes to Supported Platforms


Effective in version 9.6.0, Informatica dropped support for 32-bit Linux and for Solaris on x64. Before you
upgrade to Informatica 9.6.0 on a supported 64-bit server, back up the installation and restore it on the 64-bit
server. When you select the Informatica product to upgrade, enter the path to the restored installation. For
more information, see the Informatica upgrade guide.

Chapter 32

Changes to PowerCenter (9.6.0)


This chapter includes the following topics:

• Native Connectivity to Microsoft SQL Server
• Pushdown Optimization for ODBC Sources and Targets
• Repository Connection File Default Location
• Repository Connection File
• Umask Configuration for Operating System Profiles

Native Connectivity to Microsoft SQL Server


Effective in version 9.6.0, you must install the Microsoft SQL Server 2012 Native Client to configure native
connectivity to Microsoft SQL Server databases from Windows machines.

Previously, you did not have to install an SQL client because Informatica used the Microsoft OLE DB provider
for native connectivity.

If you upgrade from an earlier version, you must install the Microsoft SQL Server 2012 Native Client for the
existing mappings to work.

Pushdown Optimization for ODBC Sources and Targets
Effective in version 9.6.0, Informatica dropped support for pushdown optimization to ODBC sources and
targets.

Repository Connection File Default Location


Effective in version 9.6.0, pmrep stores connection information in [Link] in the home directory by default.
You can store the connection information in a different location when you set the INFA_REPCNX_INFO
environment variable.

Previously, pmrep stored the connection information in [Link] in the directory where you started pmrep.
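For example, on UNIX you might store the connection file in a shared location before you connect (a sketch; the path and the repository, domain, user, and password values are illustrative):

export INFA_REPCNX_INFO=/shared/infa/repo_connect.cnx
pmrep connect -r Rep_Dev -d Domain_Dev -n Administrator -x MyPassword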

Repository Connection File
Effective in version 9.6.0, each time you run pmrep connect, the command deletes the [Link] file. If the
pmrep connect command succeeds, the command replaces the [Link] file with the repository connection
information.

Previously, the pmrep connect command would not delete the [Link] file each time you ran pmrep
connect.

Umask Configuration for Operating System Profiles


Effective in version 9.6.0, you do not have to set umask to 000 when you configure operating system profiles.

Previously, you had to set umask to 000 to enable operating system profiles to access files written by the
DTM.

If you upgrade from an earlier version, the umask setting is not changed. You can change the umask setting
before or after you upgrade. For example, you can change umask to 077 for maximum security. If you change
the umask setting after you upgrade, you must restart the Informatica services.
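For example, to tighten permissions you might add the following line to the shell profile of the user that runs the Informatica services and then restart the services (a sketch; where you set umask depends on how your environment starts the services):

umask 077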

Chapter 33

Changes to PowerCenter Big Data Edition (9.6.0)
This chapter includes the following topics:

• Hadoop Environment Properties File
• Mappings in the Native Environment

Hadoop Environment Properties File


Effective in version 9.6.0, the Hadoop environment properties file [Link] is available at the following
path: <InformaticaInstallationDir>/services/shared/hadoop/<Hadoop_distribution_name>/infaConf
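For example, with a Cloudera distribution the resolved path might look like the following (the distribution directory name is illustrative):

<InformaticaInstallationDir>/services/shared/hadoop/cloudera_cdh4u2/infaConf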

Mappings in the Native Environment


Effective in version 9.6.0, you can enable the Data Integration Service to maximize parallelism when it runs
mappings in the native environment. When you maximize parallelism, the Data Integration Service can use
multiple partitions to process a mapping. By default, each mapping has a maximum parallelism value of
Auto. As a result, each mapping uses the maximum parallelism value set for the Data Integration Service
process.

Previously, you could not enable the Data Integration Service to use multiple partitions to process a mapping
in the native environment. By default, each upgraded mapping has a maximum parallelism value of one. As a
result, partitioning is disabled for upgraded mappings.

Chapter 34

Changes to Metadata Manager (9.6.0)
This chapter includes the following topics:

• Browser Support
• Metadata Manager Agent
• Metadata Manager Business Glossaries
• Metadata Manager Documentation
• mmcmd Changes
• Native Connectivity to Microsoft SQL Server
• Password Modification for Resources

Browser Support
Effective in version 9.6.0, the Metadata Manager application can run in the following web browsers:

• Google Chrome
• Microsoft Internet Explorer
Previously, the Metadata Manager application could run in the following web browsers:

• Microsoft Internet Explorer
• Mozilla Firefox

Metadata Manager Agent


Effective in version 9.6.0, you no longer have to install the Metadata Manager Agent separately for the
following metadata source types:

• Cognos
• Oracle Business Intelligence Enterprise Edition
• Sybase PowerDesigner

Previously, you had to install the Metadata Manager Agent separately to extract metadata from these
sources.

Metadata Manager Business Glossaries


Effective in version 9.6.0, Metadata Manager business glossaries are deprecated and replaced with
Informatica Analyst business glossaries.

If you have a Metadata Manager business glossary that you created in a previous version of Metadata
Manager, you must export the glossary from the previous version of Metadata Manager before you upgrade
to version 9.6.0. After you upgrade, you can import the glossary into Informatica Analyst. To view the
Informatica Analyst business glossary in Metadata Manager, create a Business Glossary resource in
Metadata Manager 9.6.0.

Metadata Manager Documentation


Effective in version 9.6.0, the Informatica PowerCenter Metadata Manager Business Glossary Guide is
obsolete.

For information about creating and configuring Business Glossary resources in Metadata Manager, see
the Informatica PowerCenter 9.6.0 Metadata Manager Administrator Guide. For information about viewing
Business Glossary resources in Metadata Manager, see the Informatica PowerCenter 9.6.0 Metadata
Manager User Guide.

mmcmd Changes
Domain Security Changes
Effective in version 9.6.0, mmcmd has the following changes related to domain security:

Environment Variables

You might have to configure environment variables to run mmcmd. If the domain uses Kerberos
authentication, you must set the KRB5_CONFIG environment variable on your system or in the mmcmd
batch file. If secure communication is enabled for the domain, you must set the INFA_TRUSTSTORE and
INFA_TRUSTSTORE_PASSWORD environment variables in the mmcmd batch file.

Previously, you did not have to configure environment variables for mmcmd.
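For example, on UNIX you might set the variables before you run mmcmd (a sketch; the paths and password are illustrative, and the exact value that INFA_TRUSTSTORE expects is an assumption to verify for your installation):

export KRB5_CONFIG=/etc/krb5.conf
export INFA_TRUSTSTORE=/opt/informatica/ssl
export INFA_TRUSTSTORE_PASSWORD=changeit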

Command Options

All mmcmd commands that authenticate with the domain contain options related to Kerberos
authentication. You must specify the options if the domain uses Kerberos authentication.

The command options are as follows:

--domainName (-dn)
Required if you use Kerberos authentication and you do not specify the --gateway option. Name of the Informatica domain.

--gateway (-hp)
Required if you use Kerberos authentication and you do not specify the --domainName option. Host names and port numbers of the gateway nodes in the domain.

--keyTab (-kt)
Required if you use Kerberos authentication and you do not specify a password. Path and file name of the keytab file for the Metadata Manager user.

--mmServiceName (-mm)
Required if you use Kerberos authentication. Name of the Metadata Manager Service.

--namespace (-n)
Required if the domain uses LDAP authentication or Kerberos authentication. Optional if the domain uses native authentication. Name of the security domain to which the Metadata Manager user belongs.

--password (-pw)
Required if you do not use Kerberos authentication. Also required if you use Kerberos authentication and you do not specify the --keyTab option. Password for the Metadata Manager user.

-pcRepositoryNamespace
Required if the domain uses LDAP authentication or Kerberos authentication. Optional if the domain uses native authentication. Name of the security domain to which the PowerCenter repository user belongs.

--securityDomain (-sdn)
Required if the domain uses LDAP authentication or Kerberos authentication. Optional if the domain uses native authentication. Name of the security domain to which the Informatica domain user belongs.
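For example, a command that authenticates with a Kerberos-enabled domain might look like the following (a sketch; the command name, service name, security domain, and keytab path are illustrative, and any other options the command requires are omitted):

mmcmd listResources --domainName MyDomain --mmServiceName MM_Service --namespace MyKerberosRealm --keyTab /home/mmuser/mmuser.keytab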

Business Glossary Upgrade Changes


Effective in version 9.6.0, mmcmd includes the following command related to upgrading business glossaries:

Command Description

migrateBGLinks Restores the related catalog objects for a business glossary after you upgrade from version 9.5.x.

Native Connectivity to Microsoft SQL Server


Effective in version 9.6.0, you must install the Microsoft SQL Server 2012 Native Client to configure native
connectivity to Microsoft SQL Server databases from Windows machines.

Previously, you did not have to install an SQL client because Informatica used the Microsoft OLE DB provider
for native connectivity.

If you upgrade from an earlier version, you must install the Microsoft SQL Server 2012 Native Client. Install
the client so that the Metadata Manager Service can connect to Microsoft SQL Server databases.

Password Modification for Resources
Effective in version 9.6.0, to change the password for a resource, you edit the resource, enable the Modify
Password option, and enter the new password in the Password field. This change prevents users from
viewing the password with a password revelation tool.

Previously, you edited the resource, selected the string of dots in the Password field, and entered the new
password.

Chapter 35

Changes to Adapters for PowerCenter (9.6.0)
This chapter includes the following topics:

• PowerExchange for Facebook
• PowerExchange for Hadoop
• PowerExchange for LinkedIn
• PowerExchange for Microsoft Dynamics CRM
• PowerExchange for SAP NetWeaver
• PowerExchange for Twitter
• PowerExchange for Web Services

PowerExchange for Facebook


Effective in version 9.6.0, Informatica is not shipping PowerExchange for Facebook for PowerCenter.
Informatica dropped support for versions 9.1.0, 9.5.0, and 9.5.1. You cannot upgrade from versions 9.1.0,
9.5.0, or 9.5.1, or from their hotfix versions, and sessions fail in those versions.

You can use PowerExchange for Facebook in the Developer tool.

For more information, see the End of Life (EOL) document at the following location:
[Link]

PowerExchange for Hadoop


Effective in version 9.6.0, you must re-create HDFS connections by using the NameNode URI property.
Previously, the HDFS connection properties Host Name and HDFS Port were used to create HDFS
connections. If you upgrade from a previous release, you must re-create HDFS connections.
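For example, a NameNode URI generally takes the form hdfs://<namenode_host>:<port>; the host and port below are illustrative:

hdfs://namenode01.example.com:8020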

When you configure an HDFS connection, the default Hadoop distribution is Cloudera. Previously, the
default was Apache.

PowerExchange for LinkedIn
Effective in version 9.6.0, Informatica is not shipping PowerExchange for LinkedIn for PowerCenter.
Informatica dropped support for versions 9.1.0, 9.5.0, and 9.5.1. You cannot upgrade from versions 9.1.0,
9.5.0, or 9.5.1, or from their hotfix versions, and sessions fail in those versions.

You can use PowerExchange for LinkedIn in the Developer tool.

For more information, see the End of Life (EOL) document at the following location:
[Link]

PowerExchange for Microsoft Dynamics CRM


Effective in version 9.6.0, you must download and use version 7 of the Java Cryptography Extension (JCE)
Unlimited Strength Jurisdiction Policy Files.

Previously, you had to download and use version 6 of the Java Cryptography Extension (JCE) Unlimited
Strength Jurisdiction Policy Files.

PowerExchange for SAP NetWeaver


Effective in version 9.6.0, PowerExchange for SAP NetWeaver includes the following changes:

SAP SDK libraries

PowerExchange for SAP NetWeaver uses SAP NetWeaver RFC SDK 7.20 libraries. You must install SAP
NetWeaver RFC SDK 7.20 libraries to run PowerExchange for SAP sessions.

Previously, you installed SAP RFC SDK classic libraries to run sessions.

SAP configuration file

You use the [Link] file to configure RFC-specific parameters and connection information.

Previously, you used the [Link] file to configure RFC-specific parameters and connection
information.

If you upgrade from an earlier version, you must create a [Link] file to enable communication
between PowerCenter and SAP. You cannot use the [Link] file to enable communication between
PowerCenter and SAP.

For more information, see the Informatica PowerExchange for SAP 9.6.0 User Guide for PowerCenter.

SAP connection type parameter

You no longer set the SAP connection parameter TYPE in the [Link] file to configure the
connection type. The PowerCenter Client and the PowerCenter Integration Service use the connection
parameters that you define in the [Link] file to infer the connection type.

For example, if you set the ASHOST parameter, the PowerCenter Client and PowerCenter Integration
Service create a connection to a specific SAP application server. If you set the MSHOST and GROUP
parameters, the PowerCenter Client and PowerCenter Integration Service create an SAP load balancing
connection. If you set the PROGRAM_ID, GWHOST, and GWSERV parameters, the PowerCenter Client and
PowerCenter Integration Service create a connection to an RFC server program registered at an SAP
gateway.

Previously, you used the parameter TYPE to configure the connection type. For example, you set TYPE=A
to create a connection to a specific application server. You set TYPE=B to create an SAP load balancing
connection and you set TYPE=R to create a connection to an RFC server program registered at an SAP
gateway.

If you upgrade from an earlier version, you must create a new [Link] file and configure the
connection parameters based on the type of connection that you want to create.

For more information, see the Informatica PowerExchange for SAP 9.6.0 User Guide for PowerCenter.
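To illustrate, connection entries for the three connection types might look like the following sketch. The destination names and values are illustrative, and the DEST and SYSNR entries are assumptions based on standard SAP RFC connection files rather than details from this guide:

DEST=SAP_APP
ASHOST=sapapp01
SYSNR=00

DEST=SAP_LB
MSHOST=sapmsg01
GROUP=PUBLIC

DEST=SAP_RFC
PROGRAM_ID=INFA_PROG
GWHOST=sapgw01
GWSERV=sapgw00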

ABAP stream mode sessions

PowerExchange for SAP NetWeaver uses the RFC protocol to generate and install an ABAP program in
stream mode.

Previously, PowerExchange for SAP NetWeaver used the CPI-C protocol to generate and install an ABAP
program in stream mode.

Effective in version 9.6.0, the CPI-C protocol is deprecated and Informatica will drop support in a future
release. You can run existing ABAP programs that use the CPI-C protocol. However, you cannot generate
and install new ABAP programs that use the CPI-C protocol.

When you install an existing ABAP program that uses the CPI-C protocol, you are prompted to overwrite
the program to use the RFC protocol. Informatica recommends overwriting the program to use the RFC
protocol.

BAPI and IDoc mappings

Effective in version 9.6.0, Informatica dropped support for deprecated BAPI mappings created in
versions earlier than 8.5 and deprecated IDoc mappings created in versions earlier than 7.1. If you
upgrade the deprecated mappings to version 9.6.0, the sessions will fail.

Upgrade PowerExchange for SAP NetWeaver and create new BAPI and IDoc mappings with custom
transformations.

PowerExchange for Twitter


Effective in version 9.6.0, Informatica is not shipping PowerExchange for Twitter for PowerCenter.
Informatica dropped support for versions 9.1.0, 9.5.0, and 9.5.1. You cannot upgrade from versions 9.1.0,
9.5.0, or 9.5.1, or from their hotfix versions, and sessions fail in those versions.

You can use PowerExchange for Twitter in the Developer tool.

For more information, see the End of Life (EOL) document at the following location:
[Link]

PowerExchange for Web Services
SOAP 1.2
Effective in version 9.6.0, each web service can have operations that use a SOAP 1.1 binding, a SOAP 1.2
binding, or both. You can create a Web Service Consumer transformation with a SOAP 1.1 binding, a SOAP
1.2 binding, or both. The SOAP request can use the SOAP 1.1 or the SOAP 1.2 format.

Previously, you could create only operations and Web Service Consumer transformations that use a SOAP
1.1 binding.

NTLMv2
Effective in version 9.6.0, the external web service provider authenticates the PowerCenter Integration
Service by using NTLM v1 or NTLM v2.

Previously, the external web service provider used only NTLM v1 to authenticate the PowerCenter Integration
Service.

Chapter 36

Changes to Adapters for Informatica (9.6.0)
This chapter includes the following topics:

• PowerExchange for DataSift
• PowerExchange for Facebook
• PowerExchange for LinkedIn
• PowerExchange for Salesforce
• PowerExchange for SAP NetWeaver
• PowerExchange for Twitter
• PowerExchange for Web Content-Kapow Katalyst

PowerExchange for DataSift


Effective in version 9.6.0, PowerExchange for DataSift installs with Informatica 9.6.0.

Previously, PowerExchange for DataSift had a separate installer.

PowerExchange for Facebook


• Effective in version 9.6.0, PowerExchange for Facebook installs with Informatica 9.6.0.
Previously, PowerExchange for Facebook had a separate installer.
• Effective in version 9.6.0, when you use the Self resource, you can specify a user name or a list of user
IDs or user names to extract user profiles.
Previously, when you used the Self resource, you could only specify the user ID or the Facebook operator
me to extract the profile of the current user.
• Effective in version 9.6.0, when you use the Profile Feed resource, you can specify the user name to
extract the news feeds or Facebook posts of the user.
Previously, when you used the Profile Feed resource, you could only specify the user ID or the Facebook
operator me to extract the news feeds of the current user.

PowerExchange for LinkedIn
Effective in version 9.6.0, PowerExchange for LinkedIn installs with Informatica 9.6.0.

Previously, PowerExchange for LinkedIn had a separate installer.

PowerExchange for Salesforce


Effective in version 9.6.0, PowerExchange for Salesforce installs with Informatica 9.6.0.

Previously, PowerExchange for Salesforce had a separate installer.

PowerExchange for SAP NetWeaver


Effective in version 9.6.0, PowerExchange for SAP NetWeaver uses SAP NetWeaver RFC SDK 7.20 libraries.
You must install SAP NetWeaver RFC SDK 7.20 libraries to run PowerExchange for SAP sessions.

Previously, you installed SAP RFC SDK classic libraries to run sessions.

PowerExchange for Twitter


• Effective in version 9.6.0, you cannot use basic authentication while creating a Twitter streaming
connection.
Previously, you could use basic authentication while creating a Twitter streaming connection.
• Effective in version 9.6.0, PowerExchange for Twitter installs with Informatica 9.6.0.
Previously, PowerExchange for Twitter had a separate installer.

PowerExchange for Web Content-Kapow Katalyst


Effective in version 9.6.0, PowerExchange for Web Content-Kapow Katalyst installs with Informatica 9.6.0.

Previously, PowerExchange for Web Content-Kapow Katalyst had a separate installer.
