Informatica Release Guide
10.2
September 2017
© Copyright Informatica LLC 2003, 2018
This software and documentation are provided only under a separate license agreement containing restrictions on use and disclosure. No part of this document may be
reproduced or transmitted in any form, by any means (electronic, photocopying, recording or otherwise) without prior consent of Informatica LLC.
Informatica, the Informatica logo, PowerCenter, PowerExchange, Big Data Management and Live Data Map are trademarks or registered trademarks of Informatica LLC
in the United States and many jurisdictions throughout the world. A current list of Informatica trademarks is available on the web at [Link]. Other company and product names may be trade names or trademarks of their respective owners.
Portions of this software and/or documentation are subject to copyright held by third parties, including without limitation: Copyright DataDirect Technologies. All rights
reserved. Copyright © Sun Microsystems. All rights reserved. Copyright © RSA Security Inc. All Rights Reserved. Copyright © Ordinal Technology Corp. All rights
reserved. Copyright © Aandacht c.v. All rights reserved. Copyright Genivia, Inc. All rights reserved. Copyright Isomorphic Software. All rights reserved. Copyright © Meta
Integration Technology, Inc. All rights reserved. Copyright © Intalio. All rights reserved. Copyright © Oracle. All rights reserved. Copyright © Adobe Systems Incorporated.
All rights reserved. Copyright © DataArt, Inc. All rights reserved. Copyright © ComponentSource. All rights reserved. Copyright © Microsoft Corporation. All rights
reserved. Copyright © Rogue Wave Software, Inc. All rights reserved. Copyright © Teradata Corporation. All rights reserved. Copyright © Yahoo! Inc. All rights reserved.
Copyright © Glyph & Cog, LLC. All rights reserved. Copyright © Thinkmap, Inc. All rights reserved. Copyright © Clearpace Software Limited. All rights reserved. Copyright
© Information Builders, Inc. All rights reserved. Copyright © OSS Nokalva, Inc. All rights reserved. Copyright Edifecs, Inc. All rights reserved. Copyright Cleo
Communications, Inc. All rights reserved. Copyright © International Organization for Standardization 1986. All rights reserved. Copyright © ej-technologies GmbH. All
rights reserved. Copyright © Jaspersoft Corporation. All rights reserved. Copyright © International Business Machines Corporation. All rights reserved. Copyright ©
yWorks GmbH. All rights reserved. Copyright © Lucent Technologies. All rights reserved. Copyright © University of Toronto. All rights reserved. Copyright © Daniel
Veillard. All rights reserved. Copyright © Unicode, Inc. Copyright IBM Corp. All rights reserved. Copyright © MicroQuill Software Publishing, Inc. All rights reserved.
Copyright © PassMark Software Pty Ltd. All rights reserved. Copyright © LogiXML, Inc. All rights reserved. Copyright © 2003-2010 Lorenzi Davide, All rights reserved.
Copyright © Red Hat, Inc. All rights reserved. Copyright © The Board of Trustees of the Leland Stanford Junior University. All rights reserved. Copyright © EMC
Corporation. All rights reserved. Copyright © Flexera Software. All rights reserved. Copyright © Jinfonet Software. All rights reserved. Copyright © Apple Inc. All rights
reserved. Copyright © Telerik Inc. All rights reserved. Copyright © BEA Systems. All rights reserved. Copyright © PDFlib GmbH. All rights reserved. Copyright ©
Orientation in Objects GmbH. All rights reserved. Copyright © Tanuki Software, Ltd. All rights reserved. Copyright © Ricebridge. All rights reserved. Copyright © Sencha,
Inc. All rights reserved. Copyright © Scalable Systems, Inc. All rights reserved. Copyright © jQWidgets. All rights reserved. Copyright © Tableau Software, Inc. All rights
reserved. Copyright © MaxMind, Inc. All Rights Reserved. Copyright © TMate Software s.r.o. All rights reserved. Copyright © MapR Technologies Inc. All rights reserved.
Copyright © Amazon Corporate LLC. All rights reserved. Copyright © Highsoft. All rights reserved. Copyright © Python Software Foundation. All rights reserved.
Copyright © [Link]. All rights reserved. Copyright © CNRI. All rights reserved.
This product includes software developed by the Apache Software Foundation ([Link]) and/or other software which is licensed under various versions of the Apache License (the "License"). You may obtain a copy of these Licenses at [Link]. Unless required by applicable law or agreed to in writing, software distributed under these Licenses is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the Licenses for the specific language governing permissions and limitations under the Licenses.
This product includes software which was developed by Mozilla ([Link]), software copyright The JBoss Group, LLC, all rights reserved; software copyright © 1999-2006 by Bruno Lowagie and Paulo Soares; and other software which is licensed under various versions of the GNU Lesser General Public License Agreement, which may be found at [Link]/licenses/[Link]. The materials are provided free of charge by Informatica, "as-is", without warranty of any kind, either express or implied, including but not limited to the implied warranties of merchantability and fitness for a particular purpose.
The product includes ACE(TM) and TAO(TM) software copyrighted by Douglas C. Schmidt and his research group at Washington University, University of California,
Irvine, and Vanderbilt University, Copyright (©) 1993-2006, all rights reserved.
This product includes software developed by the OpenSSL Project for use in the OpenSSL Toolkit (copyright The OpenSSL Project. All Rights Reserved), and redistribution of this software is subject to terms available at [Link] and [Link].
This product includes Curl software which is Copyright 1996-2013, Daniel Stenberg, <daniel@[Link]>. All Rights Reserved. Permissions and limitations regarding this software are subject to terms available at [Link]. Permission to use, copy, modify, and distribute this software for any purpose with or without fee is hereby granted, provided that the above copyright notice and this permission notice appear in all copies.
The product includes software copyright 2001-2005 (©) MetaStuff, Ltd. All Rights Reserved. Permissions and limitations regarding this software are subject to terms available at [Link].
The product includes software copyright © 2004-2007, The Dojo Foundation. All Rights Reserved. Permissions and limitations regarding this software are subject to
terms available at [Link]
This product includes ICU software which is copyright International Business Machines Corporation and others. All rights reserved. Permissions and limitations
regarding this software are subject to terms available at [Link]
This product includes software copyright © 1996-2006 Per Bothner. All rights reserved. Your right to use such materials is set forth in the license which may be found at [Link]/software/kawa/[Link].
This product includes OSSP UUID software which is Copyright © 2002 Ralf S. Engelschall, Copyright © 2002 The OSSP Project, Copyright © 2002 Cable & Wireless Deutschland. Permissions and limitations regarding this software are subject to terms available at [Link].
This product includes software developed by Boost ([Link]) or under the Boost software license. Permissions and limitations regarding this software are subject to terms available at [Link]/LICENSE_1_0.txt.
This product includes software copyright © 1997-2007 University of Cambridge. Permissions and limitations regarding this software are subject to terms available at [Link]/[Link].
This product includes software copyright © 2007 The Eclipse Foundation. All Rights Reserved. Permissions and limitations regarding this software are subject to terms available at [Link]/org/documents/[Link] and at [Link].
This product includes software licensed under the terms at a number of additional third-party license URLs, including the Fuse Message Broker v5.3 license agreement, the W3C Software Notice (copyright-software-20021231), the Tcl/Tk license, the iODBC license, and the EaselJS license, among others.
This product includes software licensed under the Academic Free License ([Link]), the Common Development and Distribution License ([Link]), the Common Public License ([Link]), the Sun Binary Code License Agreement Supplemental License Terms, the BSD License ([Link]/licenses/[Link]), the new BSD License ([Link]/licenses/BSD-3-Clause), the MIT License ([Link]), the Artistic License ([Link]/licenses/artistic-license-1.0), and the Initial Developer’s Public License Version 1.0 ([Link]).
This product includes software copyright © 2003-2006 Joe Walnes, 2006-2007 XStream Committers. All rights reserved. Permissions and limitations regarding this software are subject to terms available at [Link]. This product includes software developed by the Indiana University Extreme! Lab. For further information, please visit [Link].
This product includes software Copyright (c) 2013 Frank Balluffi and Markus Moeller. All rights reserved. Permissions and limitations regarding this software are subject
to terms of the MIT license.
DISCLAIMER: Informatica LLC provides this documentation "as is" without warranty of any kind, either express or implied, including, but not limited to, the implied
warranties of noninfringement, merchantability, or use for a particular purpose. Informatica LLC does not warrant that this software or documentation is error free. The
information provided in this software or documentation may include technical inaccuracies or typographical errors. The information in this software and documentation
is subject to change at any time without notice.
NOTICES
This Informatica product (the "Software") includes certain drivers (the "DataDirect Drivers") from DataDirect Technologies, an operating company of Progress Software
Corporation ("DataDirect") which are subject to the following terms and conditions:
1. THE DATADIRECT DRIVERS ARE PROVIDED "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING BUT NOT LIMITED TO,
THE IMPLIED WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT.
2. IN NO EVENT WILL DATADIRECT OR ITS THIRD PARTY SUPPLIERS BE LIABLE TO THE END-USER CUSTOMER FOR ANY DIRECT, INDIRECT, INCIDENTAL,
SPECIAL, CONSEQUENTIAL OR OTHER DAMAGES ARISING OUT OF THE USE OF THE ODBC DRIVERS, WHETHER OR NOT INFORMED OF THE POSSIBILITIES
OF DAMAGES IN ADVANCE. THESE LIMITATIONS APPLY TO ALL CAUSES OF ACTION, INCLUDING, WITHOUT LIMITATION, BREACH OF CONTRACT, BREACH
OF WARRANTY, NEGLIGENCE, STRICT LIABILITY, MISREPRESENTATION AND OTHER TORTS.
The information in this documentation is subject to change without notice. If you find any problems in this documentation, report them to us at
infa_documentation@[Link].
Informatica products are warranted according to the terms and conditions of the agreements under which they are provided. INFORMATICA PROVIDES THE
INFORMATION IN THIS DOCUMENT "AS IS" WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING WITHOUT ANY WARRANTIES OF
MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND ANY WARRANTY OR CONDITION OF NON-INFRINGEMENT.
Part I: 10.2. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25
infacmd wfs Commands. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 38
infasetup Commands. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 38
pmrep Commands. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 39
Data Types. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 39
Informatica Data Types. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 40
Documentation. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 40
Enterprise Information Catalog. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 41
New Data Sources. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 41
Custom Scanner Framework. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 42
REST APIs. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 42
Composite Data Domains. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 42
Data Domains. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 42
Export and Import of Custom Attributes. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 43
Rich Text as Custom Attribute Value. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 43
Transformation Logic . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 43
Unstructured File Types. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 44
Value Frequency. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 44
Deployment Support for Azure HDInsight. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 44
Informatica Analyst. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 44
Profiles. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 45
Intelligent Data Lake. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 45
Validate and Assess Data Using Visualization with Apache Zeppelin. . . . . . . . . . . . . . . . . . 45
Assess Data Using Filters During Data Preview. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 45
Enhanced Layout of Recipe Panel. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 45
Apply Data Quality Rules. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 46
View Business Terms for Data Assets in Data Preview and Worksheet View. . . . . . . . . . . . . 46
Prepare Data for Delimited Files. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 46
Edit Joins in a Joined Worksheet. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 46
Edit Sampling Settings for Data Preparation. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 46
Support for Multiple Enterprise Information Catalog Resources in the Data Lake. . . . . . . . . . 46
Use Oracle for the Data Preparation Service Repository. . . . . . . . . . . . . . . . . . . . . . . . . . 46
Improved Scalability for the Data Preparation Service. . . . . . . . . . . . . . . . . . . . . . . . . . . 47
Informatica Developer. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 47
Nonrelational Data Objects. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 47
Profiles. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 47
Informatica Installation. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 47
Informatica Upgrade Advisor. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 47
Intelligent Streaming. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 47
CSV Format. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 48
Data Types. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 48
Connections. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 48
Pass-Through Mappings. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 48
Sources and Targets. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 48
Transformation Support. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 48
Metadata Manager. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 49
Cloudera Navigator. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 49
PowerCenter. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 49
PowerExchange Adapters. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 50
PowerExchange Adapters for Informatica. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 50
PowerExchange Adapters for PowerCenter. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 52
Rule Specifications. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 54
Security. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 54
User Activity Logs. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 55
Transformation Language. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 55
Informatica Transformation Language. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 55
Transformations. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 56
Informatica Transformations. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 56
PowerCenter Transformations. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 60
Workflows. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 60
Informatica Workflows. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 60
PowerExchange Adapters for Informatica. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 74
PowerExchange Adapters for PowerCenter. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 74
Security. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 76
SAML Authentication. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 76
Transformations. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 76
Informatica Transformations. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 76
Workflows. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 77
Informatica Workflows. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 77
Chapter 5: New Features, Changes, and Release Tasks (10.1.1 HotFix 1). . . . 82
New Products (10.1.1 HotFix 1). . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 82
PowerExchange for Cloud Applications. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 82
New Features (10.1.1 HotFix 1). . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 82
Command Line Programs. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 82
Informatica Analyst. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 83
PowerCenter. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 84
PowerExchange Adapters. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 84
Changes (10.1.1 HotFix 1). . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 86
Support Changes. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 86
Chapter 6: New Features, Changes, and Release Tasks (10.1.1 Update 2). . . . 87
New Products (10.1.1 Update 2). . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 87
PowerExchange for MapR-DB. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 87
New Features (10.1.1 Update 2). . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 87
Big Data Management. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 88
Enterprise Information Catalog. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 89
Intelligent Data Lake. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 90
PowerExchange Adapters for Informatica. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 90
Changes (10.1.1 Update 2). . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 90
Support Changes. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 91
Big Data Management. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 92
Enterprise Information Catalog. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 92
PowerExchange Adapters for Informatica. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 93
Chapter 7: New Features, Changes, and Release Tasks (10.1.1 Update 1). . . . 94
New Features (10.1.1 Update 1). . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 94
Big Data Management. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 94
Changes (10.1.1 Update 1). . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 94
PowerExchange Adapters for Informatica. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 95
Release Tasks (10.1.1 Update 1). . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 95
PowerExchange Adapters for Informatica. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 95
Exporting Data to External Targets. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 109
Configuring Sampling Criteria for Data Preparation. . . . . . . . . . . . . . . . . . . . . . . . . . . . 109
Performing a Lookup on Worksheets. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 109
Downloading as a TDE File. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 109
Sentry and Ranger Support. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 110
Mappings . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 110
Informatica Mappings. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 110
Metadata Manager. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 110
Dataset Extraction for Cloudera Navigator Resources. . . . . . . . . . . . . . . . . . . . . . . . . . . 110
Mapping Extraction for Informatica Platform Resources. . . . . . . . . . . . . . . . . . . . . . . . . 110
PowerExchange Adapters. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 111
PowerExchange Adapters for Informatica. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 111
PowerExchange Adapters for PowerCenter. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 112
Security. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 113
Custom Kerberos Libraries. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 113
Scheduler Service Support in Kerberos-Enabled Domains. . . . . . . . . . . . . . . . . . . . . . . . 113
Single Sign-on for Informatica Web Applications. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 113
Transformations. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 113
Informatica Transformations. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 113
Web Services . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 117
Informatica Web Services . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 117
Workflows. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 117
Informatica Workflows. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 117
Profiles. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 125
Informatica Developer. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 125
Profiles. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 125
Mappings. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 126
Informatica Mappings. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 126
Enterprise Information Catalog. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 126
HDFS Scanner Enhancement. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 126
Relationships View. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 126
Metadata Manager. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 127
Cloudera Navigator Resources. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 127
Netezza Resources. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 127
PowerExchange Adapters. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 128
PowerExchange Adapters for Informatica . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 128
PowerExchange Adapters for PowerCenter . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 128
Transformations. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 129
Informatica Transformations. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 129
Workflows. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 129
Informatica Workflows. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 129
Documentation. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 130
Metadata Manager Documentation. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 130
PowerExchange for SAP NetWeaver Documentation. . . . . . . . . . . . . . . . . . . . . . . . . . . 130
Transformation Support on the Blaze Engine. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 140
Business Glossary. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 141
Inherit Glossary Content Managers to All Assets. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 141
Bi-directional Custom Relationships. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 141
Custom Colors in the Relationship View Diagram. . . . . . . . . . . . . . . . . . . . . . . . . . . . . 141
Connectivity. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 142
Schema Names in IBM DB2 Connections. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 142
Command Line Programs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 142
Documentation. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 147
Exception Management. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 147
Informatica Administrator. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 148
Domain View. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 148
Monitoring. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 148
Informatica Analyst. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 149
Profiles. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 149
Informatica Developer. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 150
Generate Source File Name. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 150
Import from PowerCenter. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 150
Copy Text Between Excel and the Developer Tool. . . . . . . . . . . . . . . . . . . . . . . . . . . . . 150
Logical Data Object Read and Write Mapping Editing. . . . . . . . . . . . . . . . . . . . . . . . . . . 151
DDL Query. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 151
Profiles. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 151
Informatica Development Platform. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 152
Live Data Map. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 153
Email Notifications. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 153
Keyword Search. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 153
Profiling. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 153
Scanners. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 153
Mappings. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 154
Informatica Mappings. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 154
Metadata Manager. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 154
Universal Resources. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 154
Incremental Loading for Oracle and Teradata Resources. . . . . . . . . . . . . . . . . . . . . . . . . 155
Hiding Resources in the Summary View. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 155
Creating an SQL Server Integration Services Resource from Multiple Package Files. . . . . . . . 155
Metadata Manager Command Line Programs. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 156
Application Properties. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 156
Migrate Business Glossary Audit Trail History and Links to Technical Metadata. . . . . . . . . . 156
PowerCenter. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 157
PowerExchange Adapters. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 157
PowerExchange Adapters for Informatica. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 157
PowerExchange Adapters for PowerCenter. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 158
Security. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 158
Transformations. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 159
Informatica Transformations. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 159
Workflows. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 160
PowerCenter Workflows. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 160
Part IV: Version 10.0. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 173
Dependency Graph. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 198
Monitoring. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 198
Informatica Analyst. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 201
Asset Versioning. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 201
Profiles. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 201
Informatica Developer. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 202
Generate and Execute DDL. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 202
Generate Relational and Flat File Metadata at Run Time. . . . . . . . . . . . . . . . . . . . . . . . . 202
Import from PowerCenter. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 203
Monitoring Tool. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 203
Object Versioning. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 204
Physical Data Objects in an Application. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 205
Profiles. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 205
Informatica Development Platform. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 206
Mappings. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 207
Informatica Mappings. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 207
Metadata Manager. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 212
Tableau Resources. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 212
Data Lineage Enhancements. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 212
Metadata Catalog Views. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 213
Impala Queries in Cloudera Navigator Resources. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 213
Parameters in Informatica Platform Resources. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 213
Recent History. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 213
Related Catalog Objects and Impact Summary Filter and Sort. . . . . . . . . . . . . . . . . . . . . 214
Session Task Instances in the Impact Summary. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 214
Application and Data Lineage Properties. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 214
PowerCenter. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 215
PowerExchange Adapters. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 215
PowerExchange Adapters for Informatica. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 215
Reference Data. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 217
Rule Specifications. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 218
Security. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 219
Groups. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 219
Privileges. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 219
Roles. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 220
Transformation Language Functions. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 220
Informatica Functions. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 220
Transformations. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 220
Informatica Transformations. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 221
Workflows. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 225
Informatica Workflows. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 225
Chapter 18: Changes (10.0). . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 227
Installation. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 227
Changed Support. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 228
Application Services. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 228
Analyst Service. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 228
Data Integration Service. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 228
Model Repository Service. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 232
SAP BW Service. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 232
Big Data. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 234
Business Glossary. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 234
Relationship View. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 234
Asset Phase. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 234
Library Workspace. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 235
Import and Export. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 235
Command Line Programs. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 235
Domain. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 236
Logs. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 236
Informatica Administrator. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 236
Domain tab. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 236
Monitoring. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 238
Informatica Analyst. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 238
Profiles. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 238
Scorecards. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 240
Informatica Developer. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 240
Application Deployment Changes. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 240
Flat File Data Objects. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 240
Microsoft SQL Server Changes. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 241
Logical Data Object Editing. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 241
Pushdown Optimization for ODBC Sources and Targets. . . . . . . . . . . . . . . . . . . . . . . . . 242
Mappings. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 242
Parameter Files. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 242
Partitioned Mappings. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 242
Pushdown Optimization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 243
Run-time Properties. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 244
Metadata Manager. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 244
ODBC Connectivity for Informix Resources. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 244
ODBC Connectivity for Microsoft SQL Server Resources. . . . . . . . . . . . . . . . . . . . . . . . . 244
Impact Summary for PowerCenter Objects. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 245
Maximum Concurrent Resource Loads. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 245
Search. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 245
Metadata Manager Log File Changes. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 246
Business Glossary Model. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 246
Profiling. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 246
PowerCenter. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 246
Informix Native Connections. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 246
pmrep Changes. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 247
PowerCenter Data Profiling. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 247
PowerExchange Adapters. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 247
PowerExchange Adapters for Informatica . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 247
Reference Data. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 249
Rule Specifications. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 250
Security. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 250
Authentication. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 250
Sources and Targets. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 251
Sources and Targets in PowerCenter. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 251
Transformations. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 251
Informatica Transformations. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 251
Workflows. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 254
Informatica Workflows. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 254
Chapter 20: New Features, Changes, and Release Tasks (9.6.1 HotFix 4). . . 260
New Features (9.6.1 HotFix 4). . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 260
Command Line Programs. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 261
Connectivity. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 262
Exception Management. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 263
Informatica Domain . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 263
Informatica Transformations. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 263
Metadata Manager. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 264
PowerExchange Adapters. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 265
Security . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 266
Changes (9.6.1 HotFix 4). . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 266
Change to Support in Version 9.6.1 HotFix 4. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 266
Application Services. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 266
Informatica Domain. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 266
Informatica Installation. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 267
Informatica Transformations. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 267
Metadata Manager. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 268
Changes to Security. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 268
Release Tasks (9.6.1 HotFix 4). . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 268
Metadata Manager. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 268
Chapter 21: New Features, Changes, and Release Tasks (9.6.1 HotFix 3). . . 270
New Features (9.6.1 HotFix 3). . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 270
Business Glossary. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 270
Informatica Data Services. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 271
Informatica Transformations. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 271
Metadata Manager. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 272
PowerExchange Adapters for PowerCenter. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 273
Changes (9.6.1 HotFix 3). . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 274
Business Glossary. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 274
Informatica Transformations. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 275
Metadata Manager. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 275
Security. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 277
Release Tasks (9.6.1 HotFix 3). . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 277
Metadata Manager. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 277
Chapter 22: New Features, Changes, and Release Tasks (9.6.1 HotFix 2). . . 279
New Features (9.6.1 HotFix 2). . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 279
Big Data. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 279
Business Glossary. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 281
Command Line Programs. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 282
Data Quality Accelerators. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 282
Informatica Developer. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 282
Informatica Domain. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 282
Informatica Transformations. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 283
Metadata Manager. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 285
PowerCenter. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 286
PowerExchange . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 287
PowerExchange Adapters. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 287
Workflows. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 290
Changes (9.6.1 HotFix 2). . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 290
Connectivity. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 290
Informatica Analyst. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 291
Informatica Transformations. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 291
Metadata Manager. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 292
PowerExchange Adapters. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 295
Release Tasks (9.6.1 HotFix 2). . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 295
Metadata Manager. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 295
Chapter 23: New Features, Changes, and Release Tasks (9.6.1 HotFix 1). . . 296
New Features (9.6.1 HotFix 1). . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 296
Big Data. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 296
Business Glossary. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 297
Command Line Programs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 298
Connectivity. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 298
Data Quality Accelerators. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 298
Documentation. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 299
Informatica Developer. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 299
Informatica Domain. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 299
Informatica Transformations. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 300
Mappings. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 300
Metadata Manager. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 301
PowerCenter. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 301
PowerExchange Adapters. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 301
Reference Data. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 303
Rule Specifications. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 303
Changes (9.6.1 HotFix 1). . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 304
Application Services. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 304
Business Glossary. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 304
Informatica Transformations. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 304
Metadata Manager. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 305
PowerCenter Transformations. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 305
PowerExchange. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 306
PowerExchange Adapters. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 306
Reference Data. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 306
Release Tasks (9.6.1 HotFix 1). . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 307
PowerExchange Adapters. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 307
Informatica Web Client Applications. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 308
Installer. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 320
Mappings. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 320
Informatica Mappings. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 320
Metadata Manager. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 321
PowerExchange. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 323
Listener Service. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 323
infacmd pwx Commands. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 323
PowerExchange Adapters. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 323
Informatica Adapters. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 323
PowerCenter Adapters. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 324
Profiles and Scorecards. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 325
Reference Data. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 326
Rule Specifications. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 326
Sources and Targets. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 326
Informatica Sources and Targets. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 326
PowerCenter Sources and Targets. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 327
Transformation Language Functions. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 327
Informatica Functions. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 327
Informatica Data Explorer. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 338
Informatica Data Quality. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 339
Informatica Data Services. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 343
Informatica Data Transformation. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 346
Informatica Developer. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 347
Informatica Development Platform. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 348
Informatica Domain. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 348
Informatica Domain Security. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 350
Command Line Programs. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 352
PowerCenter. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 355
PowerCenter Big Data Edition. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 355
PowerCenter Advanced Edition. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 356
Metadata Manager. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 356
PowerExchange Adapters for PowerCenter. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 357
PowerExchange Adapters for Informatica. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 359
Informatica Documentation. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 360
Scorecards. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 370
PowerExchange for Twitter. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 385
PowerExchange for Web Services. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 386
Preface
The Informatica Release Guide lists new features and enhancements, behavior changes between versions,
and tasks you might need to perform after you upgrade from a previous version. The Informatica Release
Guide is written for all types of users who are interested in the new features and changed behavior. This
guide assumes that you have knowledge of the features for which you are responsible.
Informatica Resources
Informatica Network
Informatica Network hosts Informatica Global Customer Support, the Informatica Knowledge Base, and other
product resources. To access Informatica Network, visit [Link]
To access the Knowledge Base, visit [Link]. If you have questions, comments, or ideas
about the Knowledge Base, contact the Informatica Knowledge Base team at
KB_Feedback@[Link].
Informatica Documentation
To get the latest documentation for your product, browse the Informatica Knowledge Base at
[Link]
If you have questions, comments, or ideas about this documentation, contact the Informatica Documentation
team through email at infa_documentation@[Link].
Informatica Product Availability Matrixes
Product Availability Matrixes (PAMs) indicate the versions of operating systems, databases, and other types
of data sources and targets that a product release supports. If you are an Informatica Network member, you
can access PAMs at
[Link]
Informatica Velocity
Informatica Velocity is a collection of tips and best practices developed by Informatica Professional
Services. Developed from the real-world experience of hundreds of data management projects, Informatica
Velocity represents the collective knowledge of our consultants who have worked with organizations from
around the world to plan, develop, deploy, and maintain successful data management solutions.
If you are an Informatica Network member, you can access Informatica Velocity resources at
[Link]
If you have questions, comments, or ideas about Informatica Velocity, contact Informatica Professional
Services at ips@[Link].
Informatica Marketplace
The Informatica Marketplace is a forum where you can find solutions that augment, extend, or enhance your
Informatica implementations. By leveraging any of the hundreds of solutions from Informatica developers
and partners, you can improve your productivity and speed up time to implementation on your projects. You
can access Informatica Marketplace at [Link]
Informatica Global Customer Support
To find your local Informatica Global Customer Support telephone number, visit the Informatica website at
the following link:
[Link]
If you are an Informatica Network member, you can use Online Support at [Link]
Part I: 10.2
This part contains the following chapters:
Chapter 1: New Products (10.2)
This chapter includes the following topic:
• PowerExchange Adapters, 26
PowerExchange Adapters
PowerExchange for Microsoft Azure Data Lake Store
Effective in version 10.2, PowerExchange for Microsoft Azure Data Lake Store is a new adapter that you can
use to connect to Microsoft Azure Data Lake Store.
For more information, see the Informatica PowerExchange for Microsoft Azure Data Lake Store User Guide.
Chapter 2: New Features (10.2)
This chapter includes the following topics:
• Application Services, 27
• Big Data, 28
• Command Line Programs, 31
• Data Types, 39
• Documentation, 40
• Enterprise Information Catalog, 41
• Informatica Analyst, 44
• Intelligent Data Lake, 45
• Informatica Developer, 47
• Informatica Installation, 47
• Intelligent Streaming, 47
• Metadata Manager, 49
• PowerCenter, 49
• PowerExchange Adapters, 50
• Rule Specifications, 54
• Security, 54
• Transformation Language, 55
• Transformations, 56
• Workflows, 60
Application Services
This section describes new application service features in 10.2.
Import Objects from Previous Versions
Effective in version 10.2, you can use infacmd to upgrade objects exported from an Informatica 10.1 or
10.1.1 Model repository to the current metadata format, and then import the upgraded objects into the
current Informatica release.
For more information, see the "Object Import and Export" chapter in the Informatica 10.2 Developer Tool
Guide, or the "infacmd mrs Command Reference" chapter in the Informatica 10.2 Command Reference.
Big Data
This section describes new big data features in 10.2.
When you run a mapping, the Data Integration Service checks for the binary files on the cluster. If they do not
exist or if they are not synchronized, the Data Integration Service prepares the files for transfer. It transfers
the files to the distributed cache through the Informatica Hadoop staging directory on HDFS. By default, the
staging directory is /tmp. This process replaces the requirement to install distribution packages on the
Hadoop cluster.
For more information, see the Informatica Big Data Management 10.2 Hadoop Integration Guide.
Cluster Configuration
A cluster configuration is an object in the domain that contains configuration information about the Hadoop
cluster. The cluster configuration enables the Data Integration Service to push mapping logic to the Hadoop
environment.
When you create the cluster configuration, you import cluster configuration properties that are contained in
configuration site files. You can import these properties directly from a cluster or from a cluster
configuration archive file. You can also create connections to associate with the cluster configuration.
Previously, you ran the Hadoop Configuration Manager utility to configure connections and other information
to enable the Informatica domain to communicate with the cluster.
For more information about cluster configuration, see the "Cluster Configuration" chapter in the Informatica
Big Data Management 10.2 Administrator Guide.
Develop mappings with complex ports, operators, and functions to perform the following tasks:
All queues are persisted by default. If the Data Integration Service node shuts down unexpectedly, the queue
does not fail over when the Data Integration Service fails over. The queue remains on the Data Integration
Service machine, and the Data Integration Service resumes processing the queue when you restart it.
By default, each queue can hold 10,000 jobs at a time. When the queue is full, the Data Integration Service
rejects job requests and marks them as failed. When the Data Integration Service starts running jobs in the
queue, you can deploy additional jobs.
For more information, see the "Queuing" chapter in the Informatica Big Data Management 10.2 Administrator
Guide.
For more information, see the "Connections" chapter in the Big Data Management 10.2 User Guide.
Data Integration Service Properties for Hadoop Integration
Effective in version 10.2, the Data Integration Service added properties required to integrate the domain with
the Hadoop environment.
Hadoop Staging Directory: The HDFS directory where the Data Integration Service pushes Informatica
Hadoop binaries and stores temporary files during processing. Default is /tmp.
Hadoop Staging User: Required if the Data Integration Service user is empty. The HDFS user that performs
operations on the Hadoop staging directory. The user needs write permission on the Hadoop staging
directory. Default is the Data Integration Service user.
Custom Hadoop OS Path: The local path to the Informatica Hadoop binaries compatible with the Hadoop
operating system. Required when the Hadoop cluster and the Data Integration Service are on different
supported operating systems. Download and extract the Informatica binaries for the Hadoop cluster on the
machine that hosts the Data Integration Service. The Data Integration Service uses the binaries in this
directory to integrate the domain with the Hadoop cluster.
The Data Integration Service can synchronize the following operating systems:
- SUSE 11 and Red Hat 6.5
Changes take effect after you recycle the Data Integration Service.
As a result of the changes in cluster integration, the following properties are removed from the Data
Integration Service:
For more information, see the Informatica 10.2 Hadoop Integration Guide.
Sqoop
Effective in version 10.2, if you use Sqoop data objects, you can use the following specialized Sqoop
connectors to run mappings on the Spark engine:
For more information, see the Informatica Big Data Management 10.2 User Guide.
Autoscaling enables the EMR cluster administrator to establish threshold-based rules for adding and
subtracting cluster task and core nodes. Big Data Management certifies support for Spark mappings that run
on an autoscaling-enabled EMR cluster.
• Update Strategy. Supports targets that are ORC bucketed on all columns.
For more information, see the "Mapping Objects in a Hadoop Environment" chapter in the Informatica Big
Data Management 10.2 User Guide.
For information about how to configure mappings for the Blaze engine, see the "Mappings in a Hadoop
Environment" chapter in the Informatica Big Data Management 10.2 User Guide.
• Normalizer
• Rank
• Update Strategy
Effective in version 10.2, the following transformations have additional support on the Spark engine:
• Lookup. Supports unconnected lookup from the Filter, Aggregator, Router, Expression, and Update
Strategy transformation.
For more information, see the "Mapping Objects in a Hadoop Environment" chapter in the Informatica Big
Data Management 10.2 User Guide.
For information about how to configure mappings for the Spark engine, see the "Mappings in a Hadoop
Environment" chapter in the Informatica Big Data Management 10.2 User Guide.
infacmd cluster Commands
The following table describes new infacmd cluster commands:
createConfiguration: Creates a new cluster configuration either from XML files or from a remote cluster
manager.
listAssociatedConnections: Lists connections by type that are associated with the specified cluster
configuration.
listConfigurationGroupPermissions: Lists the permissions that a group has for a cluster configuration.
listConfigurationUserPermissions: Lists the permissions that a user has for a cluster configuration.
refreshConfiguration: Refreshes a cluster configuration either from XML files or from a remote cluster
manager.
For more information, see the "infacmd cluster Command Reference" chapter in the Informatica 10.2
Command Reference.
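As a sketch, you might create a cluster configuration from an archive file exported from the cluster. Only the
createConfiguration command name comes from the table above; the -cn and -fp option names and values
are illustrative assumptions:
    infacmd cluster createConfiguration -dn MyDomain -un Administrator -pd MyPassword
        -cn CDH_ClusterConfig -fp /tmp/cluster_site_files.zip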
infacmd dis Commands
For more information, see the "infacmd dis Command Reference" chapter in the Informatica 10.2 Command
Reference.
infacmd ipc Commands
For more information, see the "infacmd ipc Command Reference" chapter in the Informatica 10.2 Command
Reference.
infacmd isp Commands
The following table describes changes to infacmd isp commands:
getUserActivityLog: Returns user activity log data, which now includes successful and unsuccessful user
login attempts from Informatica clients. The user activity data includes the following properties for each
login attempt from an Informatica client:
- Application name
- Application version
- Host name or IP address of the application host
If the client sets custom properties on login requests, the data includes the custom properties.
listConnections: Lists connection names by type. You can list by all connection types or filter the results
by one connection type. The -ct option is now available for the command. Use the -ct option to filter
connection types.
purgeLog: Purges log events and database records for license usage. The -lu option is now obsolete.
SwitchToGatewayNode: The following options are added for configuring SAML authentication:
- asca. The alias name specified when importing the identity provider assertion signing certificate into the
truststore file used for SAML authentication.
- saml. Enables or disables SAML authentication in the Informatica domain.
- std. The directory containing the custom truststore file required to use SAML authentication on gateway
nodes within the domain.
- stp. The custom truststore password used for SAML authentication.
For more information, see the "infacmd isp Command Reference" chapter in the Informatica 10.2 Command
Reference.
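For example, the following sketch uses the new -ct option to return only connections of a single type. The
HADOOP type value is an assumption for illustration:
    infacmd isp listConnections -dn MyDomain -un Administrator -pd MyPassword -ct HADOOP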
The following table describes Hadoop connection options that are new in 10.2:
blazeJobMonitorURL: The host name and port number for the Blaze Job Monitor.
rejDirOnHadoop: Enables hadoopRejDir. Used to specify a location to move reject files when you run
mappings.
hadoopRejDir: The remote directory where the Data Integration Service moves reject files when you run
mappings. Enable the reject directory using rejDirOnHadoop.
sparkEventLogDir: An optional HDFS file path of the directory that the Spark engine uses to log events.
sparkYarnQueueName: The YARN scheduler queue name used by the Spark engine that specifies available
resources on a cluster.
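As an illustration, options such as these can be supplied when you update a Hadoop connection from the
command line. The -o name=value syntax shown here is an assumption based on general infacmd
updateConnection usage, not a documented example:
    infacmd isp updateConnection -dn MyDomain -un Administrator -pd MyPassword
        -cn MyHadoopConnection -o "sparkEventLogDir='/var/log/spark' sparkYarnQueueName='prod_queue'"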
The following table describes Hadoop connection options that are renamed in 10.2:
blazeMaxPort (previously cadiMaxPort): The maximum value for the port number range for the Blaze engine.
blazeMinPort (previously cadiMinPort): The minimum value for the port number range for the Blaze engine.
blazeStagingDirectory (previously cadiWorkingDirectory): The HDFS file path of the directory that the Blaze
engine uses to store temporary files.
sparkStagingDirectory (previously SparkHDFSStagingDir): The HDFS file path of the directory that the Spark
engine uses to store temporary files for running jobs.
The following table describes Hadoop connection options that are removed from the UI and imported into the
cluster configuration:
RMAddress: The service within Hadoop that submits requests for resources or spawns YARN applications.
Imported into the cluster configuration as the property [Link].
defaultFSURI: The URI to access the default Hadoop Distributed File System. Imported into the cluster
configuration as the property [Link] or [Link].
The following table describes Hadoop connection options that are deprecated in 10.2 and imported into the
cluster configuration:
metastoreDatabaseURI*: The JDBC connection URI used to access the data store in a local metastore
setup.
remoteMetastoreURI*: The metastore URI used to access metadata in a remote metastore setup. This
property is imported into the cluster configuration as the property [Link].
* These properties are deprecated in 10.2. When you upgrade to 10.2, the property values you set in a previous release
are saved in the repository, but they do not appear in the connection properties.
The following properties are dropped. If they appear in connection strings, they will have no effect:
• hadoopClusterInfoExecutionParametersList
• passThroughSecurityEnabled
• hiverserver2Enabled
• hiveInfoExecutionParametersList
• cadiPassword
• sparkMaster
• sparkDeployMode
HBase Connection
The following table describes HBase connection options that are removed from the connection and imported
into the cluster configuration:
ZOOKEEPERPORT: Port number of the machine that hosts the ZooKeeper server.
Hive Connection
The following table describes Hive connection properties that are removed from the connection and
imported into the cluster configuration:
defaultFSURI: The URI to access the default Hadoop Distributed File System.
jobTrackerURI: The service within Hadoop that submits the MapReduce tasks to specific nodes in the
cluster.
hiveWarehouseDirectoryOnHDFS: The absolute HDFS file path of the default database for the warehouse
that is local to the cluster.
metastoreDatabaseURI: The JDBC connection URI used to access the data store in a local metastore setup.
infacmd mrs Commands
The following table describes new infacmd mrs commands:
upgradeExportedObjects: Upgrades objects exported to an .xml file from a previous Informatica release to
the current metadata format. The command generates an .xml file that contains the upgraded objects.
For more information, see the "infacmd mrs Command Reference" chapter in the Informatica 10.2 Command
Reference.
infacmd ms Commands
For more information, see the "infacmd ms Command Reference" chapter in the Informatica 10.2 Command
Reference.
infacmd wfs Commands
The following table describes new infacmd wfs commands:
listTasks: Lists the Human task instances that meet the filter criteria that you specify.
releaseTask: Releases a Human task instance from the current owner, and returns ownership of the task
instance to the business administrator that the workflow configuration identifies.
For more information, see the "infacmd wfs Command Reference" chapter in the Informatica 10.2 Command
Reference.
infasetup Commands
The following table describes changes to infasetup commands:
DefineDomain: The following options are added for configuring Secure Assertion Markup Language (SAML)
authentication:
- asca. The alias name specified when importing the identity provider assertion signing certificate into the
truststore file used for SAML authentication.
- cst. The allowed time difference between the Active Directory Federation Services (AD FS) host system
clock and the system clock on the master gateway node.
- std. The directory containing the custom truststore file required to use SAML authentication on gateway
nodes within the domain.
- stp. The custom truststore password used for SAML authentication.
DefineGatewayNode: The following options are added for configuring SAML authentication:
- asca. The alias name specified when importing the identity provider assertion signing certificate into the
truststore file used for SAML authentication.
- saml. Enables or disables SAML authentication in the Informatica domain.
- std. The directory containing the custom truststore file required to use SAML authentication on gateway
nodes within the domain.
- stp. The custom truststore password used for SAML authentication.
UpdateGatewayNode: The following options are added for configuring SAML authentication:
- asca. The alias name specified when importing the identity provider assertion signing certificate into the
truststore file used for SAML authentication.
- saml. Enables or disables SAML authentication in the Informatica domain.
- std. The directory containing the custom truststore file required to use SAML authentication on gateway
nodes within the domain.
- stp. The custom truststore password used for SAML authentication.
For more information, see the "infasetup Command Reference" chapter in the Informatica 10.2 Command
Reference.
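For example, the following sketch enables SAML authentication on a gateway node with the four new
options. Other required updateGatewayNode options are omitted, and the values are illustrative:
    infasetup updateGatewayNode -saml true -asca adfs_signing_cert
        -std /opt/informatica/security/samltruststore -stp MyTrustStorePassword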
pmrep Commands
The following table describes new pmrep commands:
For more information, see the "pmrep Command Reference" chapter in the Informatica 10.2 Command
Reference.
Data Types
This section describes new data type features in 10.2.
Informatica Data Types
This section describes new data types in the Developer tool.
The following table describes the complex data types you can use in transformations:
array: Contains an ordered collection of elements. All elements in the array must be of the same data type.
The elements can be of primitive or complex data type.
map: Contains an unordered collection of key-value pairs. The key part must be of primitive data type. The
value part can be of primitive or complex data type.
struct: Contains a collection of elements of different data types. The elements can be of primitive or
complex data types.
For more information, see the "Data Type Reference" appendix in the Informatica Big Data Management 10.2
User Guide.
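As an illustration, a single hierarchical record might combine all three complex data types. The Hive-style
type notation below is used for explanation only and is not Developer tool syntax:
    customer: struct<
        name: string,
        phones: array<string>,
        attributes: map<string, string>
    >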
Documentation
This section describes new or updated guides in 10.2.
Effective in version 10.2, the Informatica Big Data Management Security Guide is renamed to Informatica
Big Data Management Administrator Guide. It contains the security information and additional
administrator tasks for Big Data Management.
For more information, see the Informatica Big Data Management 10.2 Administrator Guide.
Effective in version 10.2, the Informatica Big Data Management Installation and Upgrade Guide is
renamed to Informatica Big Data Management Hadoop Integration Guide. Effective in version 10.2, the
Data Integration Service can automatically install the Big Data Management binaries to the Hadoop
cluster to integrate the domain with the cluster. The integration tasks in the guide do not include
installation of the distribution package.
For more information, see the Informatica Big Data Management 10.2 Hadoop Integration Guide.
Effective in version 10.2, the Informatica Live Data Map Administrator Guide is renamed to Informatica
Catalog Administrator Guide.
For more information, see the Informatica Catalog Administrator Guide 10.2.
Effective in version 10.2, the Informatica Administrator Reference for Live Data Map is renamed to
Informatica Administrator Reference for Enterprise Information Catalog.
For more information, see the Informatica Administrator Reference for Enterprise Information Catalog
10.2.
Effective in version 10.2, you can ingest custom metadata into the catalog using Enterprise Information
Catalog. For more information, see the new guide, Informatica Enterprise Information Catalog 10.2 Custom
Metadata Integration Guide.
Effective in version 10.2, the Informatica Live Data Map Installation and Configuration Guide is renamed
to Informatica Enterprise Information Catalog Installation and Configuration Guide.
For more information, see the Informatica Enterprise Information Catalog 10.2 Installation and
Configuration Guide.
Effective in version 10.2, you can use REST APIs exposed by Enterprise Information Catalog. For more
information, see the new guide, Informatica Enterprise Information Catalog 10.2 REST API Reference.
Effective in version 10.2, the Informatica Live Data Map Upgrading from version <x> is renamed to
Informatica Enterprise Information Catalog Upgrading from versions 10.1, 10.1.1, 10.1.1 HF1, and 10.1.1
Update 2.
For more information, see the Informatica Enterprise Information Catalog Upgrading from versions 10.1,
10.1.1, 10.1.1 HF1, and 10.1.1 Update 2 guide.
You can create resources in Informatica Catalog Administrator to extract metadata from the following data
sources:
• Apache Atlas
• Informatica Axon
For more information about new resources, see the Informatica Catalog Administrator Guide 10.2.
Custom metadata is metadata that you define. You can define a custom model, create a custom resource
type, and create a custom resource to ingest custom metadata from a custom data source. You can use
custom metadata integration to extract and ingest metadata from custom data sources for which Enterprise
Information Catalog does not provide a model.
For more information about custom metadata integration, see the Informatica Enterprise Information Catalog
10.2 Custom Metadata Integration Guide.
REST APIs
Effective in version 10.2, you can use Informatica Enterprise Information Catalog REST APIs to access and
configure features related to the objects and models associated with a data source.
The REST APIs allow you to retrieve information related to objects and models associated with a data source.
In addition, you can create, update, or delete entities related to models and objects such as attributes,
associations, and classes.
For more information about the REST APIs, see the Informatica Enterprise Information Catalog 10.2
REST API Reference.
You can view composite data domains for tabular assets in the Asset Details view after you create and
enable composite data domain discovery for resources in the Catalog Administrator. You can also search for
composite data domains and view details of the composite data domains in the Asset Details view.
For more information about composite data domains, see the "View Assets" chapter in the Informatica
Enterprise Information Catalog 10.2 User Guide, and the "Catalog Administrator Concepts" and "Managing
Composite Data Domains" chapters in the Informatica Catalog Administrator Guide 10.2.
Data Domains
This section describes new features related to data domains in Enterprise Information Catalog.
• Use reference tables, rules, and regular expressions to create a data rule or column rule.
• Use minimum conformance percentage or minimum conforming rows for data domain match.
For more information about data domains and resources, see the "Managing Resources" chapter in the
Informatica Catalog Administrator Guide 10.2.
For more information about privileges, see the "Privileges and Roles" chapter in the Informatica Administrator
Reference for Enterprise Information Catalog 10.2.
For more information about data domain curation, see the "View Assets" chapter in the Informatica Enterprise
Information Catalog 10.2 User Guide.
For more information about export and import of custom attributes, see the "View Assets" chapter in the
Informatica Enterprise Information Catalog 10.2 User Guide.
For more information about assigning custom attribute values to an asset, see the "View Assets" chapter in
the Informatica Enterprise Information Catalog 10.2 User Guide.
Transformation Logic
Effective in version 10.2, you can view transformation logic for assets in the Lineage and Impact view. The
Lineage and Impact view displays transformation logic for assets that contain transformations. The
transformation view displays transformation logic for data structures, such as tables and columns. The view
also displays various types of transformations, such as filter, joiner, lookup, expression, sorter, union, and
aggregate.
For more information about transformation logic, see the "View Lineage and Impact" chapter in the
Informatica Enterprise Information Catalog 10.2 User Guide.
For more information about unstructured file types, see the "Managing Resources" chapter in the Informatica
Catalog Administrator Guide 10.2.
Value Frequency
Configure and View Value Frequency
Effective in version 10.2, you can enable value frequency along with column data similarity in the Catalog
Administrator to compute the frequency of values in a data source. You can view the value frequency for view
column, table column, CSV field, XML file field, and JSON file data assets in the Asset Details view after you
run the value frequency on a data source in the Catalog Administrator.
For more information about configuring value frequency, see the "Catalog Administrator Concepts" chapter in
the Informatica Catalog Administrator Guide 10.2. To view value frequency for a data asset, see the "View
Assets" chapter in the Informatica Enterprise Information Catalog 10.2 User Guide.
For more information about permissions and privileges, see the "Permissions Overview" and "Privileges and
Roles Overview" chapters in the Informatica Administrator Reference for Enterprise Information Catalog 10.2.
For more information, see the "Create the Application Services" chapter in the Informatica Enterprise
Information Catalog 10.2 Installation and Configuration Guide.
Informatica Analyst
This section describes new Analyst tool features in 10.2.
Rule Specification
Effective in version 10.2, you can configure a rule specification in the Analyst tool and use the rule
specification in the column profile.
For more information about using rule specifications in the column profiles, see the "Rules in Informatica
Analyst" chapter in the Informatica 10.2 Data Discovery Guide.
Intelligent Data Lake uses Apache Zeppelin to display worksheets in the form of a visualization Notebook
that contains graphs and charts. For more details about Apache Zeppelin, see the Apache Zeppelin
documentation. When you visualize data using Zeppelin's capabilities, you can view relationships between
different columns and create multiple charts and graphs.
When you open the visualization Notebook for the first time after a data asset is published, Intelligent Data
Lake uses the CLAIRE engine to create Smart Visualization suggestions in the form of histograms of the
numeric columns created by the user.
For more information about the visualization notebook, see the "Validate and Assess Data Using
Visualization with Apache Zeppelin" chapter in the Informatica Intelligent Data Lake 10.2 User Guide.
For more information, see the "Discover Data" chapter in the Informatica Intelligent Data Lake 10.2 User Guide.
For more information, see the "Prepare Data" chapter in the Informatica Intelligent Data Lake 10.2 User Guide.
For more information, see the "Prepare Data" chapter in the Informatica Intelligent Data Lake 10.2 User Guide.
For more information, see the "Discover Data" chapter in the Informatica Intelligent Data Lake 10.2 User Guide.
For more information, see the "Prepare Data" chapter in the Informatica Intelligent Data Lake 10.2 User Guide.
For more information, see the "Prepare Data" chapter in the Informatica Intelligent Data Lake User Guide.
For more information, see the "Prepare Data" chapter in the Informatica Intelligent Data Lake 10.2 User Guide.
Informatica Developer
This section describes new Developer tool features in 10.2.
For more information, see the "Physical Data Objects" chapter in the Informatica 10.2 Developer Tool Guide.
Profiles
This section describes new features for profiles and scorecards.
Rule Specification
Effective in version 10.2, you can use rule specifications when you create a column profile in the Developer
tool. To use the rule specification, generate a mapplet from the rule specification and validate the mapplet as
a rule.
For more information about using rule specifications in the column profiles, see the "Rules in Informatica
Developer" chapter in the Informatica 10.2 Data Discovery Guide.
Informatica Installation
This section describes new installation features in 10.2.
Effective in version 10.2, you can run the Informatica Upgrade Advisor to validate the services and check for
obsolete services in the domain before you perform an upgrade.
For more information about the upgrade advisor, see the Informatica Upgrade Guides.
Intelligent Streaming
This section describes new Intelligent Streaming features in 10.2.
CSV Format
Effective in version 10.2, Streaming mappings can read and write data in CSV format.
For more information about the CSV format, see the "Sources and Targets in a Streaming Mapping" chapter in
the Informatica Intelligent Streaming 10.2 User Guide.
Data Types
Effective in version 10.2, Streaming mappings can read, process, and write hierarchical data. You can use
array, struct, and map complex data types to process the hierarchical data.
For more information, see the "Sources and Targets in a Streaming Mapping" chapter in the Informatica
Intelligent Streaming 10.2 User Guide.
Connections
Effective in version 10.2, you can use the following new messaging connections in Streaming mappings:
• AmazonKinesis. Access Amazon Kinesis Stream as source or Amazon Kinesis Firehose as target. You can
create and manage an AmazonKinesis connection in the Developer tool or through infacmd.
• MapRStreams. Access MapRStreams as targets. You can create and manage a MapRStreams connection
in the Developer tool or through infacmd.
For more information, see the "Connections" chapter in the Informatica Intelligent Streaming 10.2 User Guide.
Pass-Through Mappings
Effective in version 10.2, you can pass any payload format directly from source to target in Streaming
mappings.
You can project columns in binary format to pass a payload from source to target in its original form or to
pass a payload format that is not supported.
For more information, see the "Sources and Targets in a Streaming Mapping" chapter in the Informatica
Intelligent Streaming 10.2 User Guide.
• AmazonKinesis. Represents data in an Amazon Kinesis Stream or Amazon Kinesis Firehose Delivery
Stream.
• MapRStreams. Represents data in a MapR Stream.
For more information, see the "Sources and Targets in a Streaming Mapping" chapter in the Informatica
Intelligent Streaming 10.2 User Guide.
Transformation Support
Effective in version 10.2, you can use the Rank transformation with restrictions in Streaming mappings.
For more information, see the "Intelligent Streaming Mappings" chapter in the Informatica Intelligent
Streaming 10.2 User Guide.
Cloudera Navigator
Effective in version 10.2, you can provide the truststore file information to enable a secure connection to a
Cloudera Navigator resource. When you create or edit a Cloudera Navigator resource, enter the path and file
name of the truststore file for the Cloudera Navigator SSL instance and the password of the truststore file.
For more information about creating a Cloudera Navigator Resource, see the "Database Management
Resources" chapter in the Informatica Metadata Manager 10.2 Administrator Guide.
PowerCenter
This section describes new PowerCenter features in 10.2.
Audit Logs
Effective in version 10.2, you can generate audit logs when you import an .xml file that contains one or more
repository objects into the PowerCenter repository. To generate audit logs, enable the Security Audit Trail
configuration option in the PowerCenter Repository Service properties in the Administrator tool. The user
activity log captures all the audit messages.
The audit logs contain information about the file, such as the file name and size, the number of objects
imported, and the time of the import operation.
For more information, see the "pmrep Command Reference" chapter in the Informatica 10.2 Command
Reference, the Informatica 10.2 Application Service Guide, and the Informatica 10.2 Administrator Guide.
For more information, see the "Working with Targets" chapter in the Informatica 10.2 PowerCenter Designer
Guide.
Object Queries
Effective in version 10.2, you can create and delete object queries with the pmrep commands.
For more information, see the "pmrep Command Reference" chapter in the Informatica 10.2 Command
Reference.
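A hypothetical sketch of the create and delete operations follows; the createquery and deletequery command
names, options, and query expression syntax are assumptions, not the documented pmrep syntax:
    pmrep createquery -q DailySalesObjects -t shared -e "folder where name = 'Sales'"
    pmrep deletequery -q DailySalesObjects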
You can also update a connection, with or without a parameter in the password, with the pmrep command.
For more information, see the "pmrep Command Reference" chapter in the Informatica 10.2 Command
Reference.
PowerExchange Adapters
This section describes new PowerExchange adapter features in 10.2.
• You can read data from or write data to the Amazon S3 buckets in the following regions:
- Asia Pacific (Mumbai)
- Canada (Central)
- China (Beijing)
- EU (London)
- US East (Ohio)
• You can run Amazon Redshift mappings on the Spark engine. When you run the mapping, the Data
Integration Service pushes the mapping to a Hadoop cluster and processes the mapping on the Spark
engine, which significantly increases the performance.
• You can use AWS Identity and Access Management (IAM) authentication to securely control access to
Amazon S3 resources.
• You can connect to Amazon Redshift Clusters available in Virtual Private Cloud (VPC) through VPC
endpoints.
• You can use AWS Identity and Access Management (IAM) authentication to run a session on the EMR
cluster.
For more information, see the Informatica PowerExchange for Amazon Redshift 10.2 User Guide.
• You can read data from or write data to the Amazon S3 buckets in the following regions:
- Asia Pacific (Mumbai)
- Canada (Central)
- China (Beijing)
- EU (London)
- US East (Ohio)
• Compression format support: Deflate (read: No, write: Yes) and Snappy (read: No, write: Yes).
• You can select the type of source from which you want to read data in the Source Type option under the
advanced properties for an Amazon S3 data object read operation. You can select Directory or File source
types.
• You can select the type of the data sources in the Resource Format option under the Amazon S3 data
objects properties. You can read data from the following source formats:
- Binary
- Flat
- Avro
- Parquet
• You can connect to Amazon S3 buckets available in Virtual Private Cloud (VPC) through VPC endpoints.
• You can run Amazon S3 mappings on the Spark engine. When you run the mapping, the Data Integration
Service pushes the mapping to a Hadoop cluster and processes the mapping on the Spark engine.
• You can choose to overwrite the existing files. You can select the Overwrite File(s) If Exists option in the
Amazon S3 data object write operation properties to overwrite the existing files.
• You can use AWS Identity and Access Management (IAM) authentication to securely control access to
Amazon S3 resources.
• You can filter the metadata to optimize the search performance in the Object Explorer view.
• You can use AWS Identity and Access Management (IAM) authentication to run a session on the EMR
cluster.
For more information, see the Informatica PowerExchange for Amazon S3 10.2 User Guide.
• You can use PowerExchange for HBase to read from sources and write to targets stored in the WASB file
system on Azure HDInsight.
• You can associate a cluster configuration with an HBase connection. A cluster configuration is an object
in the domain that contains configuration information about the Hadoop cluster. The cluster configuration
enables the Data Integration Service to push mapping logic to the Hadoop environment.
For more information, see the Informatica PowerExchange for HBase 10.2 User Guide.
PowerExchange for HDFS
Effective in version 10.2, you can associate a cluster configuration with an HDFS connection. A cluster
configuration is an object in the domain that contains configuration information about the Hadoop cluster.
The cluster configuration enables the Data Integration Service to push mapping logic to the Hadoop
environment.
For more information, see the Informatica PowerExchange for HDFS 10.2 User Guide.
For more information, see the Informatica PowerExchange for Hive 10.2 User Guide.
• You can run MapR-DB mappings on the Spark engine. When you run the mapping, the Data Integration
Service pushes the mapping to a Hadoop cluster and processes the mapping on the Spark engine, which
significantly increases the performance.
• You can configure dynamic partitioning for MapR-DB mappings that you run on the Spark engine.
• You can associate a cluster configuration with an HBase connection for MapR-DB. A cluster configuration
is an object in the domain that contains configuration information about the Hadoop cluster. The cluster
configuration enables the Data Integration Service to push mapping logic to the Hadoop environment.
For more information, see the Informatica PowerExchange for MapR-DB 10.2 User Guide.
For more information, see the Informatica PowerExchange for Microsoft Azure Blob Storage 10.2 User Guide.
For more information, see the Informatica PowerExchange for Microsoft Azure SQL Data Warehouse 10.2 User
Guide.
For more information, see the Informatica PowerExchange for Salesforce 10.2 User Guide.
• You can read data from or write data to the China (Beijing) region.
• When you import objects from AmazonRSCloudAdapter in the PowerCenter Designer, the PowerCenter
Integration Service lists the table names alphabetically.
• In addition to the existing recovery options in the vacuum table, you can select the Reindex option to
analyze the distribution of the values in an interleaved sort key column.
• You can configure the multipart upload option to upload a single object as a set of independent parts.
TransferManager API uploads the multiple parts of a single object to Amazon S3. After uploading,
Amazon S3 assembles the parts and creates the whole object. TransferManager API uses the multipart
uploads option to improve performance and increase throughput when the content size of the data is large
and the bandwidth is high.
You can configure the Part Size and TransferManager Thread Pool Size options in the target session
properties.
• PowerExchange for Amazon Redshift uses the [Link] file to address potential security
issues when accessing properties. The following is the location of the [Link] file:
<Informatica installation directory>/server/bin/javalib/505100/commons-
[Link]
For more information, see the Informatica PowerExchange for Amazon Redshift 10.2 User Guide for
PowerCenter.
• You can read data from or write data to the China (Beijing) region.
• You can read multiple files from Amazon S3 and write data to a target.
• You can write multiple files to Amazon S3 target from a single source. You can configure the Distribution
Column options in the target session properties.
• When you create a mapping task to write data to Amazon S3 targets, you can configure partitions to
improve performance. You can configure the Merge Partition Files option in the target session properties.
• You can specify a directory path that is available on the PowerCenter Integration Service in the Staging
File Location property.
• You can configure the multipart upload option to upload a single object as a set of independent parts.
TransferManager API uploads the multiple parts of a single object to Amazon S3. After uploading,
Amazon S3 assembles the parts and creates the whole object. TransferManager API uses the multipart
uploads option to improve performance and increase throughput when the content size of the data is large
and the bandwidth is high.
You can configure the Part Size and TransferManager Thread Pool Size options in the target session
properties.
For more information, see the Informatica PowerExchange for Amazon S3 version 10.2 User Guide for
PowerCenter.
• Add row reject reason. Select to include the reason for rejection of rows in the reject file.
• Alternate Key Name. Indicates whether the column is an alternate key for an entity. Specify the name of
the alternate key. You can use the alternate key in update and upsert operations.
• You can configure PowerExchange for Microsoft Dynamics CRM to run on the AIX platform.
For more information, see the Informatica PowerExchange for Microsoft Dynamics CRM 10.2 User Guide for
PowerCenter.
• When you run ABAP mappings to read data from SAP tables, you can use the STRING, SSTRING, and
RAWSTRING data types. The SSTRING data type is represented as SSTR in PowerCenter.
• When you read or write data through IDocs, you can use the SSTRING data type.
• When you run ABAP mappings to read data from SAP tables, you can configure HTTP streaming.
For more information, see the Informatica PowerExchange for SAP NetWeaver 10.2 User Guide for
PowerCenter.
Rule Specifications
Effective in version 10.2, you can select a rule specification from the Model repository in Informatica
Developer and add the rule specification to a mapping. You can also deploy a rule specification as a web
service.
A rule specification is a read-only object in the Developer tool. Add a rule specification to a mapping in the
same way that you add a mapplet to a mapping. You can continue to select a mapplet that you generated
from a rule specification and add the mapplet to a mapping.
Add a rule specification to a mapping when you want the mapping to apply the logic that the current rule
specification represents. Add the corresponding mapplet to a mapping when you want to use or update the
mapplet logic independently of the rule specification.
When you add a rule specification to a mapping, you can specify the type of outputs on the rule specification.
By default, a rule specification has a single output port that contains the final result of the rule specification
analysis for each input data row. You can configure the rule specification to create an output port for every
rule set in the rule specification.
For more information, see the "Mapplets" chapter in the Informatica 10.2 Developer Mapping Guide.
Security
This section describes new security features in 10.2.
The user activity data includes the following properties for each login attempt from an Informatica client:
• Application name
• Application version
• Host name or IP address of the application host
If the client sets custom properties on login requests, the data includes the custom properties.
For more information, see the "Users and Groups" chapter in the Informatica 10.2 Security Guide.
Transformation Language
This section describes new transformation language features in 10.2.
Complex Functions
Effective in version 10.2, the transformation language introduces complex functions for complex data types.
Use complex functions to process hierarchical data on the Spark engine.
• ARRAY
• CAST
• COLLECT_LIST
• CONCAT_ARRAY
• RESPEC
• SIZE
• STRUCT
• STRUCT_AS
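For example, the following expression sketches show how a few of these functions might appear in an
Expression transformation; the port names are hypothetical:
    ARRAY(home_phone, work_phone)     Builds an array from two string ports.
    SIZE(phone_array)                 Returns the number of elements in phone_array.
    CONCAT_ARRAY(' ', name_array)     Concatenates the string elements of name_array into one string.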
For more information about complex functions, see the "Functions" chapter in the Informatica 10.2 Developer
Transformation Language Reference.
Complex Operators
Effective in version 10.2, the transformation language introduces complex operators for complex data types.
In mappings that run on the Spark engine, use complex operators to access elements of hierarchical data.
• Subscript operator [ ]
• Dot operator .
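For example, with hypothetical ports phones (an array) and emp (a struct):
    phones[0]           Uses the subscript operator to access an element of the phones array by index.
    emp.address.city    Uses the dot operator to access a nested struct element.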
For more information about complex operators, see the "Operators" chapter in the Informatica 10.2 Developer
Transformation Language Reference.
Window Functions
Effective in version 10.2, the transformation language introduces window functions. Use window functions to
process a small subset of a larger set of data on the Spark engine.
• LEAD. Provides access to a row at a given physical offset that comes after the current row.
• LAG. Provides access to a row at a given physical offset that comes before the current row.
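For example, the following expression sketches use hypothetical ports:
    LAG(order_total, 1, 0)    Returns order_total from the previous row, or 0 when no previous row exists.
    LEAD(order_date, 2)       Returns order_date from the row two positions after the current row.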
For more information, see the "Functions" chapter in the Informatica 10.2 Transformation Language
Reference.
Transformations
This section describes new transformation features in version 10.2.
Informatica Transformations
This section describes new features in Informatica transformations in 10.2.
The Address Validator transformation contains additional address functionality for the following countries:
Austria
Effective in version 10.2, you can configure the Address Validator transformation to return a postal address
code identifier for a mailbox that has two valid street addresses. For example, a building at an intersection of
two streets might have an address on both streets. The building might prefer to receive mail at one of the
addresses. The other address remains a valid address, but the postal carrier does not use it to deliver mail.
Austria Post assigns a postal address code to both addresses. Austria Post additionally assigns a postal
address code identifier to the address that does not receive mail. The postal address code identifier is
identical to the postal address code of the preferred address. You can use the postal address code identifier
to look up the preferred address with the Address Validator transformation.
To find the postal address code identifier for an address in Austria, select the Postal Address Code Identifier
AT output port. Find the port in the AT Supplementary port group.
To find the address that a postal address identifier represents, select the Postal Address Code Identifier AT
input port. Find the port in the Discrete port group.
Czech Republic
Effective in version 10.2, you can configure the Address Validator transformation to add RUIAN ID values to a
valid Czech Republic address.
Hong Kong
The Address Validator transformation includes the following features for Hong Kong:
Effective in version 10.2, the Address Validator transformation can read and write Hong Kong addresses
in Chinese or in English.
Use the Preferred Language property to select the preferred language for the addresses that the
transformation returns. The default language is Chinese. To return Hong Kong addresses in English,
update the property to ENGLISH.
Use the Preferred Script property to select the preferred character set for the address data. The default
character set is Hanzi. To return Hong Kong addresses in Latin characters, update the property to a Latin
or ASCII option. When you select a Latin script, address validation transliterates the address data into
Pinyin.
Effective in version 10.2, you can configure the Address Validator transformation to return valid
suggestions for a Hong Kong address that you enter on a single line. To return the suggestions,
configure the transformation to run in suggestion list mode.
Submit the address in the native Chinese language and in the Hanzi script. The Address Validator
transformation reads the address in the Hanzi script and returns the address suggestions in the Hanzi
script.
Submit a Hong Kong address in the following format:
[Province] [Locality] [Street] [House Number] [Building 1] [Building 2] [Sub-building]
When you submit a partial address, the transformation returns one or more address suggestions for the
address that you enter. When you enter a complete or almost complete address, the transformation
returns a single suggestion for the address that you enter.
Macau
The Address Validator transformation includes the following features for Macau:
Effective in version 10.2, the Address Validator transformation can read and write Macau addresses in
Chinese or in Portuguese.
Use the Preferred Language property to select the preferred language for the addresses that the
transformation returns. The default language is Chinese. To return Macau addresses in Portuguese,
update the property to ALTERNATIVE_2.
Use the Preferred Script property to select the preferred character set for the address data. The default
character set is Hanzi. To return Macau addresses in Latin characters, update the property to a Latin or
ASCII option.
Note: When you select a Latin script with the default preferred language option, address validation
transliterates the Chinese address data into Cantonese or Mandarin. When you select a Latin script with
the ALTERNATIVE_2 preferred language option, address validation returns the address in Portuguese.
Single-line address verification for native Macau addresses in suggestion list mode
Effective in version 10.2, you can configure the Address Validator transformation to return valid
suggestions for a Macau address that you enter on a single line in suggestion list mode. When you enter
a partial address in suggestion list mode, the transformation returns one or more address suggestions
for the address that you enter. Submit the address in the Chinese language and in the Hanzi script. The
transformation returns address suggestions in the Chinese language and in the Hanzi script. Enter a
Macau address in the following format:
[Locality] [Street] [House Number] [Building]
Use the Preferred Language property to select the preferred language for the addresses. The default
preferred language is Chinese. Use the Preferred Script property to select the preferred character set for
the address data. The default preferred script is Hanzi. To verify single-line addresses, enter the
addresses in the Complete Address port.
Taiwan
Effective in version 10.2, you can configure the Address Validator transformation to return a Taiwan address
in the Chinese language or the English language.
Use the Preferred Language property to select the preferred language for the addresses that the
transformation returns. The default language is traditional Chinese. To return Taiwan addresses in English,
update the property to ENGLISH.
Use the Preferred Script property to select the preferred character set for the address data. The default
character set is Hanzi. To return Taiwan addresses in Latin characters, update the property to a Latin or ASCII
option.
Note: The Taiwan address structure in the native script lists all address elements in a single line. You can
submit the address as a single string in a Formatted Address Line port.
When you format an input address, enter the elements in the address in the following order:
Postal Code, Locality, Dependent Locality, Street, Dependent Street, House or Building
Number, Building Name, Sub-Building Name
United States
The Address Validator transformation includes the following features for the United States:
Support for the Secure Hash Algorithm-compliant versions of CASS data files
Effective in version 10.2, the Address Validator transformation reads CASS certification data files that
comply with the SHA-256 standard.
The current CASS certification files are numbered [Link] through [Link]. To verify United
States addresses in certified mode, you must use the current files.
Note: The SHA-256-compliant files are not compatible with older versions of Informatica.
Support for Door Not Accessible addresses
Effective in version 10.2, you can configure the Address Validator transformation to identify United
States addresses that do not provide a door or entry point for a mail carrier. The mail carrier might be
unable to deliver a large item to the address.
The United States Postal Service maintains a list of addresses for which a mailbox is accessible but for
which a physical entrance is inaccessible. For example, a residence might have a mailbox outside a
locked gate or on a rural route. The address reference data includes the list of inaccessible addresses
that the USPS recognizes. Address validation can return the accessible status of an address when you
verify the address in certified mode.
To identify door not accessible (DNA) addresses, select the Delivery Point Validation Door not Accessible
port. Find the port in the US Specific port group.
Support for No Secure Location addresses
Effective in version 10.2, you can configure the Address Validator transformation to identify United
States addresses that do not provide a secure mailbox or reception point for mail. The mail carrier might
be unable to deliver a large item to the address.
The United States Postal Service maintains a list of addresses at which the mailbox is not secure. For
example, a retail store is not a secure location if the mail carrier can enter the store but cannot find a
mailbox or an employee to receive the mail. The address reference data includes the list of non-secure
addresses that the USPS recognizes. Address validation can return the non-secure status of an address
when you verify the address in certified mode.
To identify no secure location addresses, select the Delivery Point Validation No Secure Location port.
Find the port in the US Specific port group.
Support for Post Office Box Only Delivery Zones
Effective in version 10.2, you can configure the Address Validator transformation to identify ZIP Codes
that contain post office box addresses and no other addresses. When all of the addresses in a ZIP Code
are post office box addresses, the ZIP Code represents a Post Office Box Only Delivery Zone.
The Address Validator transformation adds the value Y to an address to indicate that it contains a ZIP
Code in a Post Office Box Only Delivery Zone. The value enables the postal carrier to sort mail more
easily. For example, the mailboxes in a Post Office Box Only Delivery Zone might reside in a single post
office building. The postal carrier can deliver all mail to the Post Office Box Only Delivery Zone in a single
trip.
To identify Post Office Box Only Delivery Zones, select the Post Office Box Delivery Zone Indicator port.
Find the port in the US Specific port group.
For more information, see the Informatica 10.2 Developer Transformation Guide and the Informatica 10.2
Address Validator Port Reference.
JsonStreamer
Use the JsonStreamer object in a Data Processor transformation to process large JSON files. The
transformation splits very large JSON files into complete JSON messages. The transformation can then call
other Data Processor transformation components, or a Hierarchical to Relational transformation, to complete
the processing.
For more information, see the "Streamers" chapter in the Informatica Data Transformation 10.2 User Guide.
RunPCWebService
Use the RunPCWebService action to call a PowerCenter mapplet from within a Data Processor
transformation.
For more information, see the "Actions" chapter in the Informatica Data Transformation 10.2 User Guide.
PowerCenter Transformations
Evaluate Expression
Effective in version 10.2, you can evaluate expressions that you configure in the Expression Editor of an
Expression transformation. When you test an expression, you can enter sample data and then evaluate the
expression.
For more information about evaluating an expression, see the "Working with Transformations" chapter and
the "Expression Transformation" chapter in the Informatica PowerCenter 10.2 Transformation Guide.
Workflows
This section describes new workflow features in version 10.2.
Informatica Workflows
This section describes new features in Informatica workflows in 10.2.
The table identifies the users or groups who can work on the task instances and specifies the column values
to associate with each user or group. You can update the table independently of the workflow configuration,
for example as users join or leave the project. When the workflow runs, the Data Integration Service uses the
current information in the table to assign task instances to users or groups.
You can also specify a range of numeric values or date values when you associate users or groups with the
values in a source data column. When one or more records contain a value in a range that you specify, the
Data Integration Service assigns the task instance to a user or group that you specify.
For more information, see the "Human Task" chapter in the Informatica 10.2 Developer Workflow Guide.
Effective in version 10.2, a Human task can send email notifications when the Human task completes in the
workflow and when a task instance that the Human task defines changes status. To configure notifications
for a Human task, update the Notifications properties on the Human task in the workflow. To configure
notifications for a task instance, update the Notifications properties on the step within the Human task
that defines the task instance.
When you configure notifications for a Human task instance, you can select an option to notify the task
instance owner in addition to any recipient that you specify. The option applies when a single user owns the
task instance. When you select the option to notify the task instance owner, you can optionally leave the
Recipients field empty.
For more information, see the "Human Task" chapter in the Informatica 10.2 Developer Workflow Guide.
Multiple pipelines within a mapping are imported as separate mappings into the Model repository based on
the target load order. If a workflow contains a session that runs a mapping with multiple pipelines, the import
process creates a separate Model repository mapping and mapping task for each pipeline in the PowerCenter
mapping to preserve the target load order.
For more information about importing from PowerCenter, see the "Import from PowerCenter" chapter in the
Informatica 10.2 Developer Mapping Guide and the "Workflows" chapter in the Informatica 10.2 Developer
Workflow Guide.
Chapter 3
Changes (10.2)
This chapter includes the following topics:
• Support Changes
• Application Services
• Big Data
• Command Line Programs
• Enterprise Information Catalog
• Informatica Analyst
• Intelligent Streaming
• PowerExchange Adapters
• Security
• Transformations
• Workflows
Support Changes
This section describes the support changes in 10.2.
Big Data Hadoop Distribution Support
Informatica big data products support a variety of Hadoop distributions. In each release, Informatica adds,
defers, and drops support for Hadoop distribution versions. Informatica might reinstate support for deferred
versions in a future release.
The following table lists the supported Hadoop distribution versions for Informatica 10.2 big data products:
Product                 Amazon EMR   Azure HDInsight   Cloudera CDH             Hortonworks HDP   IBM BigInsights   MapR
Big Data Management     5.4          3.6               5.10, 5.11, 5.12, 5.13   2.4, 2.5, 2.6     4.2               5.2 MEP 2.0, 5.2 MEP 3.0
Intelligent Data Lake   5.4          3.6               5.11, 5.12               2.6               4.2               5.2 MEP 2.0
To see a list of the latest supported versions, see the Product Availability Matrix on the Informatica Customer
Portal: [Link]
Big Data Management Hadoop Distributions
The following table lists the changes to supported Hadoop distribution versions for Big Data Management
in 10.2:

Hadoop Distribution   Supported Distribution Versions   10.2 Changes
MapR                  5.2 MEP 2.0.x, 5.2 MEP 3.0.x      Added support for versions 5.2 MEP 2.0 and 5.2 MEP 3.0.
                                                        Dropped support for version 5.2 MEP 1.0.

Intelligent Data Lake Hadoop Distributions
Informatica big data products support a variety of Hadoop distributions. In each release, Informatica adds,
defers, and drops support for Hadoop distribution versions. Informatica might reinstate support for deferred
versions in a future release.
The following table lists the changes to supported Hadoop distribution versions for Intelligent Data Lake
in 10.2:

Hadoop Distribution   Supported Distribution Versions                          10.2 Changes
Hortonworks HDP       2.5.x (Kerberos version), 2.6.x (non-Kerberos version)   Added support for the 2.6 non-Kerberos version.
Cloudera CDH          5.10, 5.11, 5.12                                         Added support for versions 5.10 and 5.12.
                                                                               Dropped support for version 5.8.
                                                                               Deferred support for version 5.9.
MapR                  5.2 MEP 2.0                                              Added support for version 5.2 MEP 2.0.

To see a list of the latest supported versions, see the Product Availability Matrix on the Informatica network:
[Link]
Metadata Manager
Custom Metadata Configurator (Deprecated)
Effective in version 10.2, Informatica deprecated the Custom Metadata Configurator in Metadata Manager.
You can use the load template to load metadata from metadata source files into a custom resource. Create a
load template for the models that use Custom Metadata Configurator templates.
For more information about using load templates, see the "Custom XConnect Created with a Load Template"
section in the Informatica Metadata Manager 10.2 Custom Metadata Integration Guide.
Application Services
This section describes changes to Application Services in 10.2.
Content Management Service
Effective in version 10.2, you do not need to update the search index on the Model repository before you run
the infacmd cms purge command. The infacmd cms purge command updates the search index before it
purges unused tables from the reference data warehouse.
Previously, you updated the search index before you ran the command so that the Model repository held an
up-to-date list of reference tables. The Content Management Service used the list of objects in the index to
select the tables to delete.
For more information, see the "Content Management Service" chapter in the Informatica 10.2 Application
Service Guide.
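For example, the following command sketch purges unused reference data tables. The flag names follow the
standard infacmd pattern, and the domain, user, and service names are placeholders:

    # Purge unused tables from the reference data warehouse; the search index
    # is now updated automatically before the purge runs.
    infacmd.sh cms purge -dn MyDomain -un Administrator -pd '<password>' -sn MyContentManagementService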
Data Integration Service
Execution Options
Effective in version 10.2, you configure the following execution options on the Properties view for the Data
Integration Service:
• Maximum On-Demand Execution Pool Size. Controls the number of on-demand jobs that can run
concurrently. Jobs include data previews, profiling jobs, REST and SQL queries, web service requests, and
mappings run from the Developer tool.
• Maximum Native Batch Execution Pool Size. Controls the number of deployed native jobs that each Data
Integration Service process can run concurrently.
• Maximum Hadoop Batch Execution Pool Size. Controls the number of deployed Hadoop jobs that can run
concurrently.
Previously, you configured the Maximum Execution Pool Size property to control the maximum number of
jobs the Data Integration Service process could run concurrently.
When you upgrade to 10.2, the value of the maximum execution pool size upgrades to the following
properties:
• Maximum On-Demand Execution Pool Size. Inherits the value of the Maximum Execution Pool Size
property.
• Maximum Native Batch Execution Pool Size. Inherits the value of the Maximum Execution Pool Size
property.
• Maximum Hadoop Batch Execution Pool Size. Inherits the value of the Maximum Execution Pool Size
property if the original value has been changed from 10. If the value is 10, the Hadoop batch pool retains
the default size of 100.
For more information, see the "Data Integration Service" chapter in the Informatica 10.2 Application Service
Guide.
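As an illustration, the following command sketch sets the three pool sizes on a Data Integration Service.
The infacmd dis updateServiceOptions command exists, but the option keys shown here are illustrative
placeholders; see the Informatica 10.2 Command Reference for the exact names:

    # Option keys below are illustrative only.
    infacmd.sh dis updateServiceOptions -dn MyDomain -un Administrator -pd '<password>' -sn MyDIS \
      -o ExecutionOptions.MaxOnDemandExecutionPoolSize=10 \
      -o ExecutionOptions.MaxNativeBatchExecutionPoolSize=10 \
      -o ExecutionOptions.MaxHadoopBatchExecutionPoolSize=100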
Big Data
This section describes the changes to big data in 10.2.
Hadoop Connection
Effective in version 10.2, you can use the following new properties to configure the Hadoop connection:
Property Description
Cluster Configuration The name of the cluster configuration associated with the Hadoop
environment.
Appears in General Properties.
Write Reject Files to Hadoop Select the property to move the reject files to the HDFS location listed
in the property Reject File Directory when you run mappings.
Appears in Reject Directory Properties.
Reject File Directory The directory for Hadoop mapping reject files on HDFS when you run
mappings.
Appears in Reject Directory Properties.
Blaze Job Monitor Address The host name and port number for the Blaze Job Monitor.
Appears in Blaze Configuration.
YARN Queue Name The YARN scheduler queue name used by the Spark engine that
specifies available resources on a cluster.
Appears in Spark Configuration.
Effective in version 10.2, the following properties are renamed:

Current Name                 Previous Name                      Description
Hive Staging Database Name   Database Name                      Namespace for Hive staging tables.
                                                                Appears in Common Properties.
                                                                Previously appeared in Hive Properties.
Blaze Staging Directory      Temporary Working Directory on     The HDFS file path of the directory that the
                             HDFS, CadiWorkingDirectory         Blaze engine uses to store temporary files.
                                                                Appears in Blaze Configuration.
Blaze User Name              Blaze Service User Name,           The owner of the Blaze service and Blaze
                             CadiUserName                       service logs.
                                                                Appears in Blaze Configuration.
YARN Queue Name              Yarn Queue Name,                   The YARN scheduler queue name used by the
                             CadiAppYarnQueueName               Blaze engine that specifies available resources
                                                                on a cluster.
                                                                Appears in Blaze Configuration.
BlazeMaxPort                 CadiMaxPort                        The maximum value for the port number range
                                                                for the Blaze engine.
BlazeMinPort                 CadiMinPort                        The minimum value for the port number range
                                                                for the Blaze engine.
Spark Staging Directory      Spark HDFS Staging Directory       The HDFS file path of the directory that the
                                                                Spark engine uses to store temporary files for
                                                                running jobs.
Effective in version 10.2, the following properties are removed from the connection and imported into the
cluster configuration:
Property Description
Resource Manager Address The service within Hadoop that submits requests for resources or
spawns YARN applications.
Imported into the cluster configuration as the property
[Link].
Previously appeared in Hadoop Cluster Properties.
Default File System URI The URI to access the default Hadoop Distributed File System.
Imported into the cluster configuration as the property
[Link] or [Link].
Previously appeared in Hadoop Cluster Properties.
Effective in version 10.2, the following properties are deprecated and are removed from the connection:
Property Description
Metastore Database URI* The JDBC connection URI used to access the data store in a local
metastore setup.
Previously appeared in Hive Configuration.
Metastore Database Driver* Driver class name for the JDBC data store.
Previously appeared in Hive Configuration.
Metastore Database Password* The password for the metastore user name.
Previously appeared in Hive Configuration.
Remote Metastore URI* The metastore URI used to access metadata in a remote metastore
setup.
This property is imported into the cluster configuration as the
property [Link].
Previously appeared in Hive Configuration.
Job Monitoring URL The URL for the MapReduce JobHistory server.
Previously appeared in Hive Configuration.
* These properties are deprecated in 10.2. When you upgrade to 10.2, the property values that you set in a previous
release are saved in the repository, but they do not appear in the connection properties.
HBase Connection
Effective in version 10.2, the following properties are removed from the HBase connection and imported into
the cluster configuration:
Property Description
ZooKeeper Host(s) Name of the machine that hosts the ZooKeeper server.
ZooKeeper Port Port number of the machine that hosts the ZooKeeper server.
Enable Kerberos Connection Enables the Informatica domain to communicate with the HBase
master server or region server that uses Kerberos authentication.
HBase Master Principal Service Principal Name (SPN) of the HBase master server.
HBase Region Server Principal Service Principal Name (SPN) of the HBase region server.
Hive Connection
Effective in version 10.2, the Hive connection has the following changes:
• You cannot use a PowerExchange for Hive connection if you want the Hive driver to run mappings in the
Hadoop cluster. To use the Hive driver to run mappings in the Hadoop cluster, use a Hadoop connection.
• The following properties are removed from the connection and imported into the cluster configuration:
Property Description
Default FS URI The URI to access the default Hadoop Distributed File System.
JobTracker/Yarn Resource Manager URI The service within Hadoop that submits the MapReduce tasks to
specific nodes in the cluster.
Hive Warehouse Directory on HDFS The absolute HDFS file path of the default database for the
warehouse that is local to the cluster.
Metastore Database URI The JDBC connection URI used to access the data store in a local
metastore setup.
Metastore Database Driver Driver class name for the JDBC data store.
Metastore Database Password The password for the metastore user name.
Remote Metastore URI The metastore URI used to access metadata in a remote metastore
setup.
This property is imported into the cluster configuration as the
property [Link].
Execution Environment
Effective in version 10.2, you can configure the Reject File Directory as a new property in the Hadoop
Execution Environment.
Name Value
Reject File Directory The directory for Hadoop mapping reject files on HDFS when you run mappings in the Hadoop
environment. The Blaze engine can write reject files to the Hadoop environment for flat file, HDFS, and
Hive targets. The Spark and Hive engines can write reject files to the Hadoop environment for flat file
and HDFS targets.
Choose one of the following options:
- On the Data Integration Service machine. The Data Integration Service stores the reject files based
on the RejectDir system parameter.
- On the Hadoop Cluster. The reject files are moved to the reject directory configured in the Hadoop
connection. If the directory is not configured, the mapping fails.
- Defer to the Hadoop Connection. The reject files are moved based on whether the reject directory is
enabled in the Hadoop connection properties. If the reject directory is enabled, the reject files are
moved to the reject directory configured in the Hadoop connection. Otherwise, the Data Integration
Service stores the reject files based on the RejectDir system parameter.
Monitoring
Effective in version 10.2, the AllHiveSourceTables row in the Summary Statistics view in the Administrator
tool includes records read from the following sources:
• Original Hive sources in the mapping.
• Staging Hive tables defined by the Hive engine.
• Staging data between two linked MapReduce jobs, such as an aggregator job and a join job.
For more information, see the "Monitoring Mappings in the Hadoop Environment" chapter of the Big Data
Management 10.2 User Guide.
Sensitive Properties
Effective in version 10.2, the following sensitive properties are included in the cluster configuration:
• [Link]
• [Link]
• [Link]
• [Link]
• [Link]
• [Link]
Sensitive properties are included but masked when you generate a cluster configuration archive file to deploy
on the machine that runs the Developer tool.
Previously, you configured these properties in .xml configuration files on the machines that run the Data
Integration Service and the Developer tool.
For more information about sensitive properties, see the Informatica Big Data Management 10.2 Administrator
Guide.
Sqoop
Effective in version 10.2, if you create a password file to access a database, Sqoop ignores the password file.
Sqoop uses the value that you configure in the Password field of the JDBC connection.
For more information, see the "Mapping Objects in the Hadoop Environment" chapter in the Informatica Big
Data Management 10.2 User Guide.
Command Line Programs
This section describes changes to commands in 10.2.
infacmd ihs Commands
The following table describes new infacmd ihs commands:

Command          Description
BackupData       Backs up HDFS data in the internal Hadoop cluster to a zip file. When you back up the data, the
                 Informatica Cluster Service saves all the data created by Enterprise Information Catalog, such as
                 HBase data, scanner data, and ingestion data.
removesnapshot   Removes existing HDFS snapshots so that you can run the infacmd ihs BackupData command
                 successfully to back up HDFS data.
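For example, a hedged sketch of a backup run. The authentication flags follow the standard infacmd
pattern; the output file flag is an assumption, not a confirmed option name:

    # Remove existing HDFS snapshots, then back up the catalog data.
    infacmd.sh ihs removesnapshot -dn MyDomain -un Administrator -pd '<password>' -sn MyClusterService
    infacmd.sh ihs BackupData -dn MyDomain -un Administrator -pd '<password>' -sn MyClusterService \
      -of /tmp/catalog_backup.zip   # -of (output file) is an illustrative flag name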
Enterprise Information Catalog
This section describes changes to Enterprise Information Catalog in 10.2.
Effective in version 10.2, Informatica renamed the following products and tools:
• The product Informatica Live Data Map is renamed to Informatica Enterprise Information Catalog.
• The Informatica Live Data Map Administrator tool is renamed to Informatica Catalog Administrator.
• The installer is renamed from Live Data Map to Enterprise Information Catalog.
Informatica Analyst
This section describes changes to the Analyst tool in 10.2.
Parameters
This section describes changes to Analyst tool parameters.
System Parameters
Effective in version 10.2, the Analyst tool displays the file path of system parameters in the following format:
$$[Parameter Name]/[Path].
Previously, the Analyst tool displayed the local file path of the data object and did not resolve the system
parameter.
For more information about viewing data objects, see the Informatica 10.2 Analyst Tool Guide.
Intelligent Streaming
This section describes the changes to Informatica Intelligent Streaming in 10.2.
For more information, see the "Sources and Targets in a Streaming Mapping" chapter in the Informatica
Intelligent Streaming 10.2 User Guide.
PowerExchange Adapters
This section describes changes to PowerExchange adapters in 10.2.
PowerExchange for Amazon S3
Effective in version 10.2, PowerExchange for Amazon S3 has the following changes:
• You can provide the folder path without specifying the bucket name in the advanced properties for read
and write operations in the following format: /<folder_name>. The Data Integration Service appends this
folder path to the folder path that you specify in the connection properties.
Previously, you specified the bucket name along with the folder path in the advanced properties for read
and write operations in the following format: <bucket_name>/<folder_name>.
• You can view the bucket name directory and its subdirectory list in the left panel, and the selected list
of files in the right panel, of the metadata import browser.
Previously, PowerExchange for Amazon S3 displayed the list of bucket names in the left panel and the
folder path along with file names in the right panel of the metadata import browser.
• PowerExchange for Amazon S3 creates the data object read operation and data object write operation for
the Amazon S3 data object automatically.
Previously, you had to create the data object read operation and data object write operation for the
Amazon S3 data object manually.
For more information, see the Informatica PowerExchange for Amazon S3 10.2 User Guide.
PowerExchange Adapters for PowerCenter
PowerExchange for Amazon Redshift
Previously, mappings would run even if the public schema was selected.
For more information, see the Informatica PowerExchange for Amazon Redshift 10.2 User Guide for
PowerCenter.
For more information, see the Informatica PowerExchange for Email Server 10.2 User Guide for PowerCenter.
For more information, see the Informatica PowerExchange for JD Edwards EnterpriseOne 10.2 User Guide for
PowerCenter.
For more information, see the Informatica PowerExchange for JD Edwards World 10.2 User Guide for
PowerCenter.
For more information, see the Informatica PowerExchange for LDAP 10.2 User Guide for PowerCenter.
For more information, see the Informatica PowerExchange for Lotus Notes 10.2 User Guide for PowerCenter.
For more information, see the Informatica PowerExchange for Oracle E-Business Suite 10.2 User Guide for
PowerCenter.
For more information, see the Informatica PowerExchange for Siebel 10.2 User Guide for PowerCenter.
Security
This section describes changes to security features in 10.2.
SAML Authentication
Effective in version 10.2, you must configure Security Assertion Markup Language (SAML) authentication at
the domain level, and on all gateway nodes within the domain.
Previously, you had to configure SAML authentication at the domain level only.
For more information, see the "SAML Authentication for Informatica Web Applications" chapter in the
Informatica 10.2 Security Guide.
Transformations
This section describes changed transformation behavior in 10.2.
Informatica Transformations
This section describes the changes to the Informatica transformations in 10.2.
Address Validator Transformation
The Address Validator transformation contains the following updates to address functionality:
All Countries
Effective in version 10.2, the Address Validator transformation uses version 5.11.0 of the Informatica
Address Verification software engine. The engine enables the features that Informatica adds to the Address
Validator transformation in version 10.2.
Previously, the transformation used version 5.9.0 of the Informatica Address Verification software engine.
Japan
Effective in version 10.2, you can configure a single mapping to return the Choumei Aza code for a current
address in Japan. To return the code, select the Current Choumei Aza Code JP port. You can use the code to
find the current version of any legacy address that Japan Post recognizes.
Previously, you used the New Choumei Aza Code JP port to return incremental changes to the Choumei Aza
code for an address. The transformation did not include the Current Choumei Aza Code JP port. You needed
to configure two or more mappings to verify a current Choumei Aza code and the corresponding address.
United Kingdom
Effective in version 10.2, you can configure the Address Validator transformation to return postal,
administrative, and traditional county information from the Royal Mail Postcode Address File. The
transformation returns the information on the following Province ports:

County Information      Port
Postal county           Province 1
Administrative county   Province 2
Traditional county      Province 3

Previously, the transformation returned postal county information when the information was postally
relevant.
Effective in version 10.2, the Address Validator transformation supports the following updates to the
certification standards:
• Address Matching Approval System (AMAS) from Australia Post. Updated to Cycle 2017.
• SendRight certification from New Zealand Post. Updated to Cycle 2017.
• Software Evaluation and Recognition Program (SERP) from Canada Post. Updated to Cycle 2017.
Informatica continues to support the current versions of the Coding Accuracy Support System (CASS)
standards from the United States Postal Service and the Service National de L'Adresse (SNA) standard from
La Poste of France.
For more information, see the Informatica 10.2 Developer Transformation Guide and the Informatica 10.2
Address Validator Port Reference.
For comprehensive information about the updates to the Informatica Address Verification software engine
from version 5.9.0 through version 5.11.0, see the Informatica Address Verification 5.11.0 Release Guide.
Expression Transformation
Effective in version 10.2, you can configure the Expression transformation to be an active transformation on
the Spark engine by using a window function or an aggregate function with windowing properties.
For more information, see the Big Data Management 10.2 Administrator Guide.
Workflows
This section describes changed workflow behavior in version 10.2.
Informatica Workflows
This section describes the changes to Informatica workflow behavior in 10.2.
For more information, see the "Human Task" chapter in the Informatica 10.2 Developer Workflow Guide.
Chapter 4
Release Tasks (10.2)
This chapter includes the following topics:
• PowerExchange Adapters
PowerExchange Adapters
This section describes release tasks for PowerExchange adapters in version 10.2.
For more information, see the Informatica 10.2 PowerExchange for Amazon Redshift User Guide for
PowerCenter.
Property Description
Access Key The access key ID used to access the Amazon account resources. Required if you do not use AWS
Identity and Access Management (IAM) authentication.
Note: Ensure that you have valid AWS credentials before you create a connection.
Secret Key The secret access key used to access the Amazon account resources. This value is associated with
the access key and uniquely identifies the account. You must specify this value if you specify the
access key ID. Required if you do not use AWS Identity and Access Management (IAM)
authentication.
Master Symmetric Key   Optional. Provide a 256-bit AES encryption key in the Base64 format when you enable
                       client-side encryption. You can generate a key using a third-party tool.
                       If you specify a value, ensure that you specify the encryption type as client-side
                       encryption in the target session properties.
For more information, see the Informatica 10.2 PowerExchange for Amazon S3 User Guide for PowerCenter.
• For the client, if you upgrade from 9.x to 10.2, copy the local_policy.jar, US_export_policy.jar, and
cacerts files from the following 9.x installation folder <Informatica installation directory>\clients
\java\jre\lib\security to the following 10.2 installation folder <Informatica installation
directory>\clients\java\32bit\jre\lib\security.
If you upgrade from 10.x to 10.2, copy the local_policy.jar, US_export_policy.jar, and cacerts files
from the following 10.x installation folder <Informatica installation directory>\clients\java
\32bit\jre\lib\security to the corresponding 10.2 folder.
• For the server, copy the local_policy.jar, US_export_policy.jar, and cacerts files from the
<Informatica installation directory>/java/jre/lib/security folder of the previous release to the
corresponding 10.2 folder, as shown in the sketch below.
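A minimal shell sketch of the server-side copy, assuming a Linux host; both installation paths are
placeholders:

    OLD_HOME=/opt/informatica_old     # previous release installation (placeholder)
    NEW_HOME=/opt/informatica_10.2    # 10.2 installation (placeholder)
    for f in local_policy.jar US_export_policy.jar cacerts; do
        cp "$OLD_HOME/java/jre/lib/security/$f" "$NEW_HOME/java/jre/lib/security/"
    done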
When you upgrade from an earlier version, you must copy the msdcrm folder to the installation location of
10.2; a sketch of the server-side copy follows the list below.
• For the client, copy the msdcrm folder from the <Informatica installation directory>\clients
\PowerCenterClient\client\bin\javalib folder of the previous release to the corresponding 10.2
folder.
• For the server, copy the msdcrm folder from the <Informatica installation directory>/server/bin/
javalib folder of the previous release to the corresponding 10.2 folder.
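For example, on a Linux server the copy might look like the following sketch with placeholder paths:

    cp -r /opt/informatica_old/server/bin/javalib/msdcrm \
          /opt/informatica_10.2/server/bin/javalib/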
Effective in version 10.2, Informatica dropped support for the CPI-C protocol.
Use the RFC or HTTP protocol to generate and install ABAP programs while reading data from SAP
tables.
If you upgrade ABAP mappings that were generated with the CPI-C protocol, you must complete the
following tasks:
1. Regenerate and reinstall the ABAP program by using stream (RFC/HTTP) mode.
2. Create a System user or a communication user with the appropriate authorization profile to enable
dialog-free communication between SAP and Informatica.
For more information, see the Informatica PowerExchange for SAP NetWeaver 10.2 User Guide for
PowerCenter.
Effective in version 10.2, Informatica dropped support for the ABAP table reader standard transports.
Informatica will not ship the standard transports for ABAP table reader. Informatica will ship only secure
transports for ABAP table reader.
If you upgrade from an earlier version, you must delete the standard transports and install the secure
transports.
For more information, see the Informatica PowerExchange for SAP NetWeaver 10.2 Transport Versions
Installation Notice.
PowerExchange Adapters 79
Added Support for HTTP Streaming for ABAP Table Reader Mappings
Effective in version 10.2, when you run ABAP mappings to read data from SAP tables, you can configure
HTTP streaming.
To use HTTP stream mode for upgraded ABAP mappings, perform the following tasks:
Note: If you configure HTTP streaming, but do not regenerate and reinstall the ABAP program in stream
mode, the session fails.
Chapter 5
New Products (10.1.1 HotFix 1)
PowerExchange for Cloud Applications
For more information, see the Informatica PowerExchange for Cloud Applications 10.1.1 HotFix 1 User Guide.
infacmd dis Commands (10.1.1 HF1)
The following table describes new infacmd dis commands:
Command Description
disableMappingValidationEnvironment Disables the mapping validation environment for mappings that are deployed
to the Data Integration Service.
enableMappingValidationEnvironment Enables a mapping validation environment for mappings that are deployed to
the Data Integration Service.
setMappingExecutionEnvironment Specifies the mapping execution environment for mappings that are
deployed to the Data Integration Service.
For more information, see the "Infacmd dis Command Reference" chapter in the Informatica 10.1.1 HotFix1
Command Reference.
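For example, a sketch that enables a validation environment for deployed mappings. The authentication
flags follow the standard infacmd pattern; the application and environment flags are illustrative names
only:

    infacmd.sh dis enableMappingValidationEnvironment -dn MyDomain -un Administrator \
      -pd '<password>' -sn MyDIS -an MyApplication -ve hadoop
    # -an (application name) and -ve (validation environment) are illustrative flag names.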
infacmd mrs Commands (10.1.1 HF1)
The following table describes new infacmd mrs commands:
Command Description
disableMappingValidationEnvironment Disables the mapping validation environment for mappings that you run from
the Developer tool.
enableMappingValidationEnvironment Enables a mapping validation environment for mappings that you run from
the Developer tool.
setMappingExecutionEnvironment Specifies the mapping execution environment for mappings that you run
from the Developer tool.
For more information, see the "Infacmd mrs Command Reference" chapter in the Informatica 10.1.1 HotFix1
Command Reference.
infacmd ps Command
The following table describes a new infacmd ps command:
Command Description
restoreProfilesAndScorecards Restores profiles and scorecards from a previous version to version 10.1.1 HotFix 1.
For more information, see the "infacmd ps Command Reference" chapter in the Informatica 10.1.1 HotFix 1
Command Reference.
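A hedged sketch of the restore command; the service flag is an assumption about which service the
command targets:

    infacmd.sh ps restoreProfilesAndScorecards -dn MyDomain -un Administrator -pd '<password>' -sn MyDIS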
Informatica Analyst
This section describes new Analyst tool features in version 10.1.1 HotFix 1.
For more information about scorecards, see the "Scorecards in Informatica Analyst" chapter in the
Informatica 10.1.1 HotFix 1 Data Discovery Guide.
PowerCenter
This section describes new PowerCenter features in version 10.1.1 HotFix 1.
For more information, see the Informatica PowerCenter 10.1.1 HotFix 1 Advanced Workflow Guide.
PowerExchange Adapters
This section describes new PowerExchange adapter features in version 10.1.1 HotFix 1.
• You can read data from or write data to the following regions:
- Asia Pacific (Mumbai)
- Canada (Central)
- US East (Ohio)
• PowerExchange for Amazon Redshift supports the asterisk pushdown operator (*) that can be pushed to
the Amazon Redshift database by using source-side, target-side, or full pushdown optimization.
• For client-side and server-side encryption, you can configure the customer master key ID generated by
AWS Key Management Service (AWS KMS) in the connection.
For more information, see the Informatica 10.1.1 HotFix 1 PowerExchange for Amazon Redshift User Guide for
PowerCenter.
• You can read data from or write data to the following regions:
- Asia Pacific (Mumbai)
- Canada (Central)
- US East (Ohio)
• For client-side and server-side encryption, you can configure the customer master key ID generated by
AWS Key Management Service (AWS KMS) in the connection.
• When you write data to the Amazon S3 buckets, you can compress the data in GZIP format.
• You can override the Amazon S3 folder path when you run a mapping.
For more information, see the Informatica PowerExchange for Amazon S3 10.1.1 HotFix 1 User Guide for
PowerCenter.
For more information, see the Informatica PowerExchange for Microsoft Azure Blob Storage 10.1.1 HotFix 1
User Guide.
You can use the following update strategies when you write data to Microsoft Azure SQL Data Warehouse:
• Update as Update. The PowerCenter Integration Service updates all rows as updates.
• Update else Insert. The PowerCenter Integration Service updates existing rows and inserts other rows as
if marked for insert.
• Delete. The PowerCenter Integration Service deletes the specified records from Microsoft Azure SQL Data
Warehouse.
For more information, see the Informatica PowerExchange for Microsoft Azure SQL Data Warehouse 10.1.1
HotFix 1 User Guide for PowerCenter.
You can configure the following properties for a Microsoft Dynamics CRM target:
• Add row reject reason. Select to include the reason for rejection of rows in the reject file.
• Alternate Key Name. Indicates whether the column is an alternate key for an entity. Specify the name of
the alternate key. You can use the alternate key in update and upsert operations.
For more information, see the Informatica PowerExchange for Microsoft Dynamics CRM 10.1.1 HotFix 1 User
Guide for PowerCenter.
For more information, see the Informatica PowerExchange for SAP NetWeaver 10.1.1 HotFix 1 User Guide.
Support Changes
Effective in version 10.1.1 HF1, the following changes apply to Informatica support for third-party platforms
and systems:
Distribution      Supported Versions     10.1.1 HotFix 1 Changes
Amazon EMR        5.4                    To enable support for Amazon EMR 5.4, apply EBF-9585 to Big Data
                                         Management 10.1.1 HotFix 1. Big Data Management version 10.1.1
                                         Update 2 supports Amazon EMR 5.0.
Cloudera CDH      5.8, 5.9, 5.10, 5.11   Added support for versions 5.10 and 5.11.
Hortonworks HDP   2.3, 2.4, 2.5, 2.6     Added support for version 2.6.
To see a list of the latest supported versions, see the Product Availability Matrix on the Informatica Customer
Portal: [Link]
PowerExchange for MapR-DB
PowerExchange for MapR-DB uses the HBase API to connect to MapR-DB. To connect to a MapR-DB table,
you must create an HBase connection in which you must specify the database type as MapR-DB. You must
create an HBase data object read or write operation, and add it to a mapping to read or write data.
You can validate and run mappings in the native environment or on the Blaze engine in the Hadoop
environment.
For more information, see the Informatica PowerExchange for MapR-DB 10.1.1 Update 2 User Guide.
Big Data Management
This section describes new big data features in version 10.1.1 Update 2.
Truncate Hive table partitions on mappings that use the Blaze run-time engine
Effective in version 10.1.1 Update 2, you can truncate Hive table partitions on mappings that use the
Blaze run-time engine.
For more information about truncating partitions in a Hive target, see the Informatica 10.1.1 Update 2 Big
Data Management User Guide.
Effective in version 10.1.1 Update 2, the Blaze engine can push filters on partitioned columns down to
the Hive source to increase performance.
When a mapping contains a Filter transformation on a partitioned column of a Hive source, the Blaze
engine reads only the partitions with data that satisfies the filter condition. To enable the Blaze engine to
read specific partitions, the Filter transformation must be the next transformation after the source in the
mapping.
For more information, see the Informatica 10.1.1 Update 2 Big Data Management User Guide.
Effective in version 10.1.1 Update 2, you can configure OraOop to run Sqoop mappings on the Spark
engine. When you read data from or write data to Oracle, you can configure the direct argument to
enable Sqoop to use OraOop.
OraOop is a specialized Sqoop plug-in for Oracle that uses native protocols to connect to the Oracle
database. When you configure OraOop, the performance improves.
For more information, see the Informatica 10.1.1 Update 2 Big Data Management User Guide.
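As an illustration, the Sqoop arguments for the mapping might include the standard Sqoop direct switch;
the mapper count shown is an arbitrary example value:

    --direct --num-mappers 8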
Effective in version 10.1.1 Update 2, if you use a Teradata PT connection to run a mapping on a Cloudera
cluster and on the Blaze engine, the Data Integration Service invokes the Cloudera Connector Powered
by Teradata at run time. The Data Integration Service then runs the mapping through Sqoop.
For more information, see the Informatica 10.1.1 Update 2 PowerExchange for Teradata Parallel
Transporter API User Guide.
Effective in version 10.1.1 Update 2, the following schedulers are valid for Hadoop distributions on both
Blaze and Spark engines:
• Fair Scheduler. Assigns resources to jobs such that all jobs receive, on average, an equal share of
resources over time.
• Capacity Scheduler. Designed to run Hadoop applications as a shared, multi-tenant cluster. You can
configure Capacity Scheduler with or without node labeling. Node label is a way to group nodes with
similar characteristics.
For more information, see the Mappings in the Hadoop Environment chapter of the Informatica 10.1.1
Update 2 Big Data Management User Guide.
Effective in version 10.1.1 Update 2, you can direct Blaze and Spark jobs to a specific YARN scheduler
queue. Queues allow multiple tenants to share the cluster. As you submit applications to YARN, the
scheduler assigns them to a queue. You configure the YARN queue in the Hadoop connection properties.
Effective in version 10.1.1 Update 2, you can use the following Hadoop security features on the IBM
BigInsights 4.2 Hadoop distribution:
• Apache Knox
• Apache Ranger
• HDFS Transparent Encryption
For more information, see the Informatica 10.1.1 Update 2 Big Data Management Security Guide.
Effective in version 10.1.1 Update 2, you can use the SSL and TLS security modes on the Cloudera and
Hortonworks Hadoop distributions, including the following security methods and plugins:
• Kerberos authentication
• Apache Ranger
• Apache Sentry
• Name node high availability
• Resource Manager high availability
For more information, see the Informatica 10.1.1 Update 2 Big Data Management Installation and
Configuration Guide.
Effective in version 10.1.1 Update 2, Big Data Management supports reading and writing to Hive on
Amazon S3 buckets for clusters configured with the following Hadoop distributions:
• Amazon EMR
• Cloudera
• Hortonworks
• MapR
• BigInsights
For more information, see the Informatica 10.1.1 Update 2 Big Data Management User Guide.
Effective in version 10.1.1 Update 2, you can create a File System resource to import metadata from files
in Windows and Linux file systems.
For more information, see the Informatica 10.1.1 Update 2 Live Data Map Administrator Guide.
Effective in version 10.1.1 Update 2, you can deploy Enterprise Information Catalog on Apache Ranger-
enabled clusters. Apache Ranger provides a security framework to manage the security of the clusters.
Effective in version 10.1.1 Update 2, you can deploy Informatica Cluster Service on hosts where Centrify
is enabled. Centrify integrates with an existing Active Directory infrastructure to manage user
authentication on remote Linux hosts.
Hadoop ecosystem
Effective in version 10.1.1 Update 2, you can use the following Hadoop distributions as a Hadoop data lake:
Effective in version 10.1.1 Update 2, you can use MariaDB 10.0.28 for the Data Preparation Service
repository.
Effective in version 10.1.1 Update 2, data analysts can view lineage of individual columns in a table
corresponding to activities such as data asset copy, import, export, publication, and upload.
SSL/TLS support
Effective in version 10.1.1 Update 2, you can integrate Intelligent Data Lake with Cloudera 5.9 clusters
that are SSL/TLS enabled.
For more information, see the Informatica 10.1.1 Update 2 PowerExchange for Amazon Redshift User Guide.
The following table lists the supported Hadoop distribution versions and changes in 10.1.1 Update 2:

Distribution      Supported Versions   10.1.1 Update 2 Changes
Azure HDInsight   3.5*                 Added support for version 3.5.
Cloudera CDH      5.8, 5.9, 5.10*      Added support for version 5.10.
Hortonworks HDP   2.3, 2.4, 2.5        Added support for versions 2.3 and 2.4.

*Azure HDInsight 3.5 and Cloudera CDH 5.10 are available for technical preview. Technical preview functionality is
supported but is not production-ready. Informatica recommends that you use it in non-production environments only.
For a complete list of Hadoop support, see the Product Availability Matrix on Informatica Network:
[Link]
Dropped support for Teradata Connector for Hadoop (TDCH) and Teradata PT objects on the Blaze engine
Effective in version 10.1.1 Update 2, Informatica dropped support for Teradata Connector for Hadoop
(TDCH) on the Blaze engine. The configuration for Sqoop connectivity in 10.1.1 Update 2 depends on the
Hadoop distribution:
IBM BigInsights and MapR
You can configure Sqoop connectivity through the JDBC connection. For information about
configuring Sqoop connectivity through JDBC connections, see the Informatica 10.1.1 Update 2 Big
Data Management User Guide.
Cloudera CDH
You can configure Sqoop connectivity through the Teradata PT connection and the Cloudera
Connector Powered by Teradata.
1. Download the Cloudera Connector Powered by Teradata .jar files and copy them to the node
where the Data Integration Service runs. For more information, see the Informatica 10.1.1
Update 2 PowerExchange for Teradata Parallel Transporter API User Guide.
2. Move the configuration parameters that you defined in the [Link] file to the
Additional Sqoop Arguments field in the Teradata PT connection. See the Cloudera Connector
Powered by Teradata documentation for a list of arguments that you can specify.
Hortonworks HDP
You can configure Sqoop connectivity through the Teradata PT connection and the Hortonworks
Connector for Teradata.
1. Download the Hortonworks Connector for Teradata .jar files and copy them to the node where
the Data Integration Service runs. For more information, see the Informatica 10.1.1 Update 2
PowerExchange for Teradata Parallel Transporter API User Guide.
2. Move the configuration parameters that you defined in the [Link] file to the
Additional Sqoop Arguments field in the Teradata PT connection, as in the sketch after this list. See
the Hortonworks Connector for Teradata documentation for a list of arguments that you can specify.
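To illustrate step 2 above, the Additional Sqoop Arguments field might carry a connector setting such as
the following sketch. The factory class shown is the one commonly documented for the Cloudera Connector
Powered by Teradata; verify it against the connector version that you download:

    -Dsqoop.connection.factories=com.cloudera.connector.teradata.TeradataManagerFactory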
Note: You can continue to use TDCH on the Hive engine through Teradata PT connections.
Deprecated support of Sqoop connectivity through Teradata PT data objects and Teradata PT connections
Effective in version 10.1.1 Update 2, Informatica deprecated Sqoop connectivity through Teradata PT
data objects and Teradata PT connections for Cloudera CDH and Hortonworks. Support will be dropped
in a future release.
To read data from or write data to Teradata by using TDCH and Sqoop, Informatica recommends that
you configure Sqoop connectivity through JDBC connections and relational data objects.
Sqoop
Effective in version 10.1.1 Update 2, you can no longer override the user name and password in a Sqoop
mapping by using the --username and --password arguments. Sqoop uses the values that you configure in the
User Name and Password fields of the JDBC connection.
For more information, see the Informatica 10.1.1 Update 2 Big Data Management User Guide.
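For example, arguments like the following no longer take effect if they appear in the Sqoop arguments
field; the values shown are placeholders:

    --username sqoop_user --password '<password>'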
Asset path
Effective in version 10.1.1 Update 2, you can view the path to the asset in the Asset Details view along
with other general information about the asset.
For more information, see the Informatica 10.1.1 Update 2 Enterprise Information Catalog User Guide.
Effective in version 10.1.1 Update 2, the profile results section for tabular assets also includes business
terms. Previously, the profile results section included column names, data types, and data domains.
For more information, see the Informatica 10.1.1 Update 2 Enterprise Information Catalog User Guide.
Effective in version 10.1.1 Update 2, if you had configured a custom attribute to allow you to enter URLs
as the attribute value, you can assign multiple URLs as attribute values to a technical asset.
For more information, see the Informatica 10.1.1 Update 2 Enterprise Information Catalog User Guide.
Effective in version 10.1.1 Update 2, you can configure the following resources to automatically detect
headers for CSV files from which you extract metadata:
• Amazon S3
• HDFS
• File System
For more information, see the Informatica 10.1.1 Update 2 Live Data Map Administrator Guide.
Effective in version 10.1.1 Update 2, you can import multiple schemas for an Amazon Redshift resource.
For more information, see the Informatica 10.1.1 Update 2 Live Data Map Administrator Guide.
Effective in version 10.1.1 Update 2, you can run Hive resources on the Data Integration Service for profiling.
For more information, see the Informatica 10.1.1 Update 2 Live Data Map Administrator Guide.
If you upgrade to version 10.1.1 Update 2, the PowerExchange for Amazon Redshift mappings created in earlier
versions must have the relevant schema name in the connection property. Otherwise, the mappings fail when
you run them on version 10.1.1 Update 2.
For more information, see the Informatica 10.1.1 Update 2 PowerExchange for Amazon Redshift User Guide.
For more information, see the Informatica 10.1.1 Update 1 PowerExchange for Teradata Parallel Transporter
API User Guide.
PowerExchange Adapters for Informatica
This section describes PowerExchange adapter changes in version 10.1.1 Update 1.
PowerExchange for Amazon S3
Effective in version 10.1.1 Update 1, the following advanced properties for an Amazon S3 data object read
and write operation are renamed:
• Folder Path
• Download S3 File in Multiple Parts
• Staging Directory
Previously, the advanced properties for an Amazon S3 data object read and write operation were:
• S3 Folder Path
• Enable Download S3 Files in Multiple Parts
• Local Temp Folder Path
For more information, see the Informatica 10.1.1 Update 1 PowerExchange for Amazon S3 User Guide.
If you had configured Teradata Connector for Hadoop (TDCH) to run Teradata mappings on the Blaze engine
and installed 10.1.1 Update 1, the Data Integration Service ignores the TDCH configuration. You must
perform the following upgrade tasks to run Teradata mappings on the Blaze engine:
Note: If you had configured TDCH to run Teradata mappings on the Blaze engine and on a distribution other
than Hortonworks, do not install 10.1.1 Update 1. You can continue to use version 10.1.1 to run mappings
with TDCH on the Blaze engine and on a distribution other than Hortonworks.
For more information, see the Informatica 10.1.1 Update 1 PowerExchange for Teradata Parallel Transporter
API User Guide.
• Intelligent Streaming
• PowerExchange Adapters
Intelligent Streaming
With the advent of big data technologies, organizations are looking to derive maximum benefit from the
velocity of data, capturing it as it becomes available, processing it, and responding to events in real time. By
adding real-time streaming capabilities, organizations can leverage the lower latency to create a complete,
up-to-date view of customers, deliver real-time operational intelligence to customers, improve fraud
detection, reduce security risk, improve physical asset management, improve total customer experience, and
generally improve their decision-making processes by orders of magnitude.
In 10.1.1, Informatica introduces Intelligent Streaming, a new product to help IT derive maximum value from
real-time queues by streaming data, processing it, and extracting meaningful business value in near real time.
Customers can process diverse data types from non-traditional sources, such as website log file data,
sensor data, message bus data, and machine data, in flight and with high degrees of accuracy.
Intelligent Streaming is built as a capability extension of Informatica's Intelligent Data Platform and provides
the following benefits for IT:
Capture Data
You can stream the following types of data from sources such as Kafka or JMS, in JSON, XML, or Avro
formats:
• Change data capture (CDC) from relational databases
• Clickstreams from web servers
• Social media event streams
• Time-series data from IoT devices
• Message bus data
• Programmable logic controller (PLC) data
• Point of sale data from devices
In addition, Informatica customers can leverage Informatica's Vibe Data Stream (licensed separately) to
collect and ingest data in real time, for example data from sensors and machine logs, to a Kafka queue.
Intelligent Streaming can then process this data.
Transform Data
Use the underlying processing platform to run the following complex data transformations in real time
without coding or scripting:
• Window transformation for streaming use cases, with the option of sliding and tumbling windows.
• Filter, Expression, Union, Router, Aggregator, Joiner, Lookup, Java, and Sorter transformations, which
can now be used in streaming mappings and run on Spark Streaming.
• Lookup transformations that can be used with flat file, HDFS, Sqoop, and Hive sources.
Publish Data
You can stream data to different types of targets, such as Kafka, HDFS, NoSQL databases, and
enterprise messaging systems.
Intelligent Streaming is built on the Informatica Big Data Platform and extends the platform to
provide streaming capabilities. Intelligent Streaming uses Spark Streaming to process streamed data. It uses
YARN to manage the resources on a Spark cluster more efficiently and uses third-party distributions to
connect to and push job processing to a Hadoop environment.
Use Informatica Developer (the Developer tool) to create streaming mappings. Use the Hadoop run-time
environment and the Spark engine to run the mapping. You can configure high availability to run the
streaming mappings on the Hadoop cluster.
For more information about Intelligent Streaming, see the Informatica Intelligent Streaming User Guide.
PowerExchange Adapters
For more information, see the Informatica PowerExchange for Amazon S3 10.1.1 User Guide.
Chapter 9
New Features (10.1.1)
This chapter includes the following topics:
• Application Services
• Big Data
• Business Glossary
• Command Line Programs
• Enterprise Information Catalog
• Informatica Analyst
• Informatica Installation
• Intelligent Data Lake
• Mappings
• Metadata Manager
• PowerExchange Adapters
• Security
• Transformations
• Web Services
• Workflows
Application Services
This section describes new application service features in version 10.1.1.
Analyst Service
Effective in version 10.1.1, you can configure an Analyst Service to store all audit data for exception
management tasks in a single database. The database stores a record of the work that users perform on
Human task instances in the Analyst tool that the Analyst Service specifies.
Set the database connection and the schema for the audit tables on the Human task properties of the Analyst
Service in the Administrator tool. After you specify a connection and schema, use the Actions menu options
in the Administrator tool to create the audit database contents. Or, use the infacmd as commands to set the
database and schema and to create the audit database contents. To set the database and the schema, run
infacmd as updateServiceOptions. To create the database contents, run infacmd as
createExceptionAuditTables.
If you do not specify a connection and schema, the Analyst Service creates audit tables for each task
instance in the database that stores the task instance data.
For more information, see the Informatica 10.1.1 Application Service Guide and the Informatica 10.1.1
Command Reference.
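A hedged sketch of the two commands; the -o option keys for the audit database and schema are
illustrative placeholders:

    # Option keys below are illustrative; see the Informatica 10.1.1 Command Reference for exact names.
    infacmd.sh as updateServiceOptions -dn MyDomain -un Administrator -pd '<password>' -sn MyAnalystService \
      -o HumanTask.AuditDatabase=AuditDBConnection -o HumanTask.AuditSchema=audit
    infacmd.sh as createExceptionAuditTables -dn MyDomain -un Administrator -pd '<password>' -sn MyAnalystService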
Big Data
This section describes new big data features in version 10.1.1.
Blaze Engine
Effective in version 10.1.1, the Blaze engine has the following new features:
• Lookup transformation. You can use SQL overrides and filter queries with Hive lookup sources.
• Sorter transformation. Global sorts are supported when the Sorter transformation is connected to a flat
file target. To maintain global sort order, you must enable the Maintain Row Order property in the flat file
target. If the Sorter transformation is midstream in the mapping, then rows are sorted locally.
• Update Strategy transformation. The Update Strategy transformation is supported with some restrictions.
For more information, see the "Mapping Objects in the Hadoop Environment" chapter in the Informatica Big
Data Management 10.1.1 User Guide.
The Blaze Summary Report contains the following information about a mapping job:
• Time taken by individual segments. A pie chart of segments within the grid task.
• Mapping properties. A table containing basic information about the mapping job.
• Tasklet execution time. A time series graph of all tasklets within the selected segment.
• Selected tasklet information. Source and target row counts and cache information for each individual
tasklet.
Note: The Blaze Summary Report is in beta. It contains most of the major features, but is not yet complete.
• Execution statistics are available in the LDTM log when the log tracing level is set to verbose initialization
or verbose data. The log includes the following mapping execution details:
- Start time, end time, and state of each task
When you run an address validation mapping in a Hadoop environment, the reference data files must reside
on each compute node on which the mapping runs. Use the script to install the reference data files on
multiple nodes in a single operation.
Find the script in the following directory in the Informatica Big Data Management installation:
When you run the script, you can enter the following information:
For more information, see the Informatica Big Data Management 10.1.1 Installation and Configuration Guide.
For more information about configuring Big Data Management in silent mode, see the Informatica Big Data
Management 10.1.1 Installation and Configuration Guide.
For more information about installing Big Data Management in an Ambari stack, see the Informatica 10.1.1
Big Data Management Installation and Configuration Guide.
For more information about using the script to populate the HDFS file system, see the Informatica Big Data
Management 10.1.1 Installation and Configuration Guide.
Spark Engine
Effective in version 10.1.1, the Spark engine has the following new features:
• Support for the following functions:
- DEC_BASE64
- ENC_BASE64
- MD5
- UUID4
- UUID_UNPARSE
- CRC32
- COMPRESS
- DECOMPRESS (ignores precision)
- AES_ENCRYPT
- AES_DECRYPT
Note: The Spark engine does not support binary data type for the join and lookup conditions.
For more information, see the "Function Reference" chapter in the Informatica Big Data Management 10.1.1
User Guide.
You can view the following Spark summary statistics in the Summary Statistics view:
For more information, see the "Mapping Objects in the Hadoop Environment" chapter in the Informatica Big
Data Management 10.1.1 User Guide.
Security
This section describes new big data security features in version 10.1.1.
For more information, see the Authorization section in the "Introduction to Big Data Management Security"
chapter of the Informatica 10.1.1 Big Data Management Security Guide.
These specialized connectors use native protocols to connect to the Teradata database.
For more information, see the Informatica 10.1.1 Big Data Management User Guide.
Business Glossary
This section describes new Business Glossary features in version 10.1.1.
For more information, see the "Glossary Administration " chapter in the Informatica 10.1.1 Business Glossary
Guide.
The import option is available in the glossary import wizard and in the command line program.
For more information, see the "Glossary Administration" chapter in the Informatica 10.1.1 Business Glossary
Guide.
Command Line Programs
This section describes new commands in version 10.1.1.
infacmd as Commands
The following table describes new infacmd as commands:
Command Description
CreateExceptionAuditTables Creates the audit tables for the Human task instances that the Analyst Service
specifies.
DeleteExceptionAuditTables Deletes the audit tables for the Human task instances that the Analyst Service
specifies.
The following table describes updated infacmd as commands:
Command Description
UpdateServiceOptions - [Link]
Identifies the database to store the audit trail tables for exception management tasks.
- [Link]
Identifies the schema to store the audit trail tables for exception management tasks.
For more information, see the "Infacmd as Command Reference" chapter in the Informatica 10.1.1 Command
Reference.
For more information, see the "infacmd dis Command Reference" chapter in the Informatica 10.1.1 Command
Reference.
infacmd mrs Commands
The following table describes a new infacmd mrs command:
Command Description
replaceMappingHadoopRuntimeConnections Replaces the Hadoop connection of all mappings in the repository with
another Hadoop connection. The Data Integration Service uses the
Hadoop connection to connect to the Hadoop cluster to run mappings
in the Hadoop environment.
For more information, see the "infacmd mrs Command Reference" chapter in the Informatica 10.1.1
Command Reference.
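A sketch of the command with the standard infacmd connection options and placeholder values; the additional options that identify the Hadoop connection to replace and its replacement are documented in the Command Reference:
infacmd mrs replaceMappingHadoopRuntimeConnections -dn MyDomain -un Administrator -pd MyPassword -sn MyModelRepositoryService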
For more information, see the "pmrep Command Reference" chapter in the Informatica 10.1.1 Command
Reference.
You can perform the following tasks with business glossary assets:
You can search for and view the full details for a business term, category, or policy in Enterprise
Information Catalog. When you view the details for a business term, Enterprise Information Catalog also
displays the glossary assets, technical assets, and other assets, such as Metadata Manager objects, that
the term is related to.
When you view a business glossary asset in the catalog, you can open the asset in the Analyst tool
business glossary for further analysis.
You can associate a business term with a technical asset to make an asset easier to understand and
identify in the catalog. For example, you associate business term "Movie Details" with a relational table
named "mv_dt." Enterprise Information Catalog displays the term "Movie Details" next to the asset name
in the search results, in the Asset Details view, and optionally, in the lineage and impact diagram.
When you associate a term with an asset, Enterprise Information Catalog provides intelligent
recommendations for the association based on data domain discovery.
For more information about business glossary assets, see the "View Assets" chapter in the Informatica 10.1.1
Enterprise Information Catalog User Guide.
Enterprise Information Catalog supports column similarity profiling for the following resource scanners:
• Amazon Redshift
• Amazon S3
• Salesforce
• HDFS
• Hive
• IBM DB2
• IBM DB2 for z/OS
• IBM Netezza
• JDBC
• Microsoft SQL Server
• Oracle
• Sybase
• Teradata
• SAP
A data domain is a predefined or user-defined Model repository object based on the semantics of column
data or a column name. Examples include Social Security number, phone number, and credit card number.
You can create data domains based on data rules or column name rules defined in the Informatica Analyst
Tool or the Informatica Developer Tool. Alternatively, you can create data domains based on existing
columns in the catalog. You can define proximity rules to configure inference for new data domains from
existing data domains configured in the catalog.
• By default, the lineage and impact diagram displays the origins, the asset that you are studying, and
the destinations for the data. You can use the slider controls to reveal intermediate assets one at a
time by distance from the seed asset or to fully expand the diagram. You can also expand all assets
within a particular data flow path.
• You can display the child assets of the asset that you are studying, all the way down to the column or
field level. When you drill down on an asset, the diagram displays the child assets that you select and
the assets to which the child assets are linked.
• You can display the business terms that are associated with the technical assets in the diagram.
• You can print the diagram and export it to a scalable vector graphics (.svg) file.
Impact analysis
When you open the Lineage and Impact view for an asset, you can switch from the diagram view to the
tabular asset summary. The tabular asset summary lists all of the assets that impact and are impacted
by the asset that you are studying. You can export the asset summary to a Microsoft Excel file to create
reports or further analyze the data.
For more information about lineage and impact analysis, see the "View Lineage and Impact" chapter in the
Informatica 10.1.1 Enterprise Information Catalog User Guide.
Extract metadata from the business intelligence tool from Oracle that provides analysis and reporting
capabilities.
Extract metadata about critical information within an organization from Informatica Master Data
Management.
Microsoft SQL Server Integration Services
Extract metadata about data integration and workflow applications from Microsoft SQL Server
Integration Services.
SAP
Extract metadata from the SAP application platform that integrates multiple business applications and
solutions.
Extract metadata from files in Amazon Elastic MapReduce using a Hive resource.
Informatica Analyst
This section describes new Analyst tool features in version 10.1.1.
Profiles
This section describes new Analyst tool features for profiles and scorecards.
Drilldown on Scorecards
Effective in version 10.1.1, when you click a data series or data point in the scorecard dashboard, the
scorecards that map to the data series or data point appear in the assets list pane.
For more information about scorecards, see the "Scorecards in Informatica Analyst" chapter in the
Informatica 10.1.1 Data Discovery Guide.
Informatica Installation
This section describes new installation features in version 10.1.1.
For more information about the upgrade advisor, see the Informatica Upgrade Guides.
For more information, see the "Discover Data" chapter in the 10.1.1 Intelligent Data Lake User Guide.
For more information, see the "Discover Data" chapter in the 10.1.1 Intelligent Data Lake User Guide.
For more information, see the "Discover Data" chapter in the 10.1.1 Intelligent Data Lake User Guide.
For more information, see the "Prepare Data" chapter in the 10.1.1 Intelligent Data Lake User Guide.
For more information, see the "Prepare Data" chapter in the 10.1.1 Intelligent Data Lake User Guide.
For more information, see the "Discover Data" chapter in the 10.1.1 Intelligent Data Lake User Guide.
Mappings
This section describes new mapping features in version 10.1.1.
Informatica Mappings
This section describes new Informatica mappings features in version 10.1.1.
For more information, see the "Mapping Parameters" chapter in the Informatica Developer 10.1.1 Mapping
Guide or the "Workflow Parameters" chapter in the Informatica Developer 10.1.1 Workflow Guide.
Metadata Manager
This section describes new Metadata Manager features in version 10.1.1.
For more information about Cloudera Navigator resources, see the "Database Management Resources"
chapter in the Informatica 10.1.1 Metadata Manager Administrator Guide.
Informatica Platform resources that are based on version 10.1.1 applications can extract metadata for
mappings in deployed workflows in addition to mappings that are deployed directly to the application.
When Metadata Manager extracts a mapping in a deployed workflow, it adds the workflow name and
Mapping task name to the mapping name as a prefix. Metadata Manager displays the mapping in the
metadata catalog within the Mappings logical group.
PowerExchange Adapters
This section describes new PowerExchange adapter features in version 10.1.1.
For more information, see the Informatica PowerExchange for Amazon Redshift 10.1.1 User Guide.
• You can use the following advanced ODBC driver configurations with PowerExchange for Cassandra:
- Load balancing policy. Determines how the queries are distributed to nodes in a Cassandra cluster
based on the specified DC Aware or Round-Robin policy.
- Filtering. Limits the connections of the drivers to a predefined set of hosts.
• You can enable the following arguments in the ODBC driver to optimize the performance:
- Token Aware. Improves the query latency and reduces load on the Cassandra node.
- Latency Aware. Ignores the slow performing Cassandra nodes while sending queries.
- Null Value Insertion. Enables you to specify null values in an INSERT statement.
- Case Sensitive. Enables you to specify schema, table, and column names in a case-sensitive fashion.
• You can process Cassandra sources and targets that contain the date, smallint, and tinyint data types.
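These driver settings are typically made in the ODBC data source definition. The following odbc.ini sketch is illustrative only; the key names are hypothetical placeholders, not the driver's documented keywords, so consult the PowerExchange for Cassandra documentation for the exact names and values:
[Cassandra_DSN]
# Hypothetical key names for illustration only
LoadBalancingPolicy=RoundRobin
TokenAware=1
LatencyAware=1
EnableNullInsert=1
EnableCaseSensitive=1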
For more information, see the Informatica PowerExchange for Cassandra 10.1.1 User Guide.
For more information, see the Informatica PowerExchange for HBase 10.1.1 User Guide.
For more information, see the Informatica PowerExchange for Hive 10.1.1 User Guide.
• You can configure partitioning for Amazon Redshift sources and targets. You can configure the partition
information so that the PowerCenter Integration Service determines the number of partitions to create at
run time.
• You can include a Pipeline Lookup transformation in a mapping.
• The PowerCenter Integration Service can push expression, aggregator, operator, union, sorter, and filter
functions to Amazon Redshift sources and targets when the connection type is ODBC and the ODBC
subtype is set to Redshift.
• You can configure advanced filter properties in a mapping.
• You can configure pre-SQL and post-SQL queries for source and target objects in a mapping.
• You can configure a Source transformation to select distinct rows from the Amazon Redshift table and
sort data.
• You can parameterize source and target table names to override the table name in a mapping.
• You can define an SQL query for source and target objects in a mapping to override the default query. You
can enter an SQL statement supported by the Amazon Redshift database.
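For example, an SQL override for a source might combine a filter with distinct, sorted output; the table and column names below are hypothetical:
SELECT DISTINCT customer_id, customer_name
FROM public.customers
WHERE region = 'WEST'
ORDER BY customer_id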
For more information, see the Informatica 10.1.1 PowerExchange for Amazon Redshift User Guide for
PowerCenter.
• You can use the following advanced ODBC driver configurations with PowerExchange for Cassandra:
- Load balancing policy. Determines how the queries are distributed to nodes in a Cassandra cluster
based on the specified DC Aware or Round-Robin policy.
- Filtering. Limits the connections of the drivers to a predefined set of hosts.
• You can enable the following arguments in the ODBC driver to optimize the performance:
- Token Aware. Improves the query latency and reduces load on the Cassandra node.
- Latency Aware. Ignores the slow performing Cassandra nodes while sending queries.
- Null Value Insertion. Enables you to specify null values in an INSERT statement.
- Case Sensitive. Enables you to specify schema, table, and column names in a case-sensitive fashion.
• You can process Cassandra sources and targets that contain the date, smallint, and tinyint data types.
For more information, see the Informatica PowerExchange for Cassandra 10.1.1 User Guide for PowerCenter.
To compress data, you must re-register the PowerExchange for Vertica plug-in with the PowerCenter
repository.
For more information, see the Informatica PowerExchange for Vertica 10.1.1 User Guide for PowerCenter.
For more information, see the "Kerberos Authentication Setup" chapter in the Informatica 10.1.1 Security
Guide.
Security Assertion Markup Language is an XML-based data format for exchanging authentication and
authorization information between a service provider and an identity provider. In an Informatica domain, the
Informatica web application is the service provider. Microsoft Active Directory Federation Services (AD FS)
2.0 is the identity provider, which authenticates web application users with your organization's LDAP or
Active Directory identity store.
For more information, see the "Single Sign-on for Informatica Web Applications" chapter in the Informatica
10.1.1 Security Guide.
Transformations
This section describes new transformation features in version 10.1.1.
Informatica Transformations
This section describes new features in Informatica transformations in version 10.1.1.
The Address Validator transformation contains additional address functionality for the following countries:
All Countries
Effective in version 10.1.1, you can add the Count Number port to an output address. The Count Number port
value indicates the position of each address in a set of suggestions that the transformation returns in
interactive mode or suggestion list mode.
For example, the Count Number port returns the number 1 for the first address in the set. The port returns the
number 2 for the second address in the set. The number increments by 1 for each address that address
validation returns.
Find the Count Number port in the Status Info port group.
China
Multi-language address parsing and verification
Effective in version 10.1.1, you can configure the Address Validator transformation to return the street
descriptor and street directional information in a valid China address in a transliterated Latin script
(Pinyin) or in English. The transformation returns the other elements in the address in the Hanzi script.
To specify the output language, set the Preferred Language advanced property on the transformation.
Effective in version 10.1.1, you can configure the Address Validator transformation to return valid
suggestions for a China address that you enter on a single line in fast completion mode. To enter an
address on a single line, select a Complete Address port from the Multiline port group. Enter the address
in the Hanzi script.
When you enter a partial address, the transformation returns one or more address suggestions for the
address that you enter. When you enter a complete valid address, the transformation returns the valid
version of the address from the reference database.
Ireland
Multi-language address parsing and verification
Effective in version 10.1.1, you can configure the Address Validator transformation to read and write the
street, locality, and county information for an Ireland address in the Irish language.
An Post, the Irish postal service, maintains the Irish-language information in addition to the English-
language addresses. You can include Irish-language street, locality, and county information in an input
address and retrieve the valid English-language version of the address. You can enter an English-
language address and retrieve an address that includes the street, locality, and county information in the
Irish language. Address validation returns all other information in English.
To specify the output language, set the Preferred Language advanced property on the transformation.
Effective in version 10.1.1, you can configure the Address Validator transformation to return rooftop
geocoordinates for an address in Ireland.
To return the geocoordinates, add the Geocoding Complete port to the output address. Find the
Geocoding Complete port in the Geocoding port group. To specify Rooftop geocoordinates, set the
Geocode Data Type advanced property on the transformation.
Effective in version 10.1.1, you can configure the Address Validator transformation to return the short or
long forms of the following elements in the English language:
• Street descriptors
To specify a preference for the elements, set the Global Preferred Descriptor advanced property on the
transformation.
Note: The Address Validator transformation writes all street information to the street name field in an
Irish-language address.
Italy
Effective in version 10.1.1, you can configure the Address Validator transformation to add the ISTAT code to
a valid Italy address. The ISTAT code contains characters that identify the province, municipality, and region
to which the address belongs. The Italian National Institute of Statistics (ISTAT) maintains the ISTAT codes.
To add the ISTAT code to an address, select the ISTAT Code port. Find the ISTAT Code port in the IT
Supplementary port group.
Japan
Geocoding enrichment for Japan addresses
Effective in version 10.1.1, you can configure the Address Validator transformation to return standard
geocoordinates for addresses in Japan.
The transformation can return geocoordinates at multiple levels of accuracy. When a valid address
contains information to the Ban level, the transformation returns house number-level geocoordinates.
When a valid address contains information to the Chome level, the transformation returns street-level
geocoordinates. If an address does not contain Ban or Chome information, Address Verification returns
locality-level geocoordinates.
To return the geocoordinates, add the Geocoding Complete port to the output address. Find the
Geocoding Complete port in the Geocoding port group.
Effective in version 10.1.1, you can configure the Address Validator transformation to return valid
suggestions for a Japan address that you enter on a single line in suggestion list mode. You can retrieve
suggestions for an address that you enter in the Kanji script or the Kana script. To enter an address on a
single line, select a Complete Address port from the Multiline port group.
When you enter a partial address, the transformation returns one or more address suggestions for the
address that you enter. When you enter a complete valid address, the transformation returns the valid
version of the address from the reference database.
South Korea
Support for Revised Romanization transliteration in South Korea addresses
Effective in version 10.1.1, the Address Validator transformation can use the Revised Romanization
system to transliterate an address between Hangul and Latin character sets. To specify a character set
for output addresses from South Korea, use the Preferred Script advanced property.
Effective in version 10.1.1, the Address Validator transformation adds a five-digit post code to a fully
valid input address that does not include a post code. The five-digit post code represents the current
post code format in use in South Korea. The transformation can add the five-digit post code to a fully
valid lot-based address and a fully valid street-based address.
To verify addresses in the older, lot-based format, use the Matching Extended Archive advanced
property.
Spain
Effective in version 10.1.1, you can configure the Address Validator transformation to add the INE code to a
valid Spain address. The INE code contains characters that identify the province, municipality, and street in
the address. The National Institute of Statistics (INE) in Spain maintains the INE codes.
To add an INE code to an address, select one or more of the following ports:
United States
Support for CASS Cycle O requirements
Effective in version 10.1.1, the Address Validator transformation adds features that support the
proposed requirements of the Coding Accuracy Support System (CASS) Cycle O standard.
To prepare for the Cycle O standard, the transformation includes the following features:
Effective in version 10.1.1, the Address Validator transformation parses non-standard mailbox data into
sub-building elements. The non-standard data might identify a college campus mailbox or a courtroom
at a courthouse.
Effective in version 10.1.1, you can return the short or long forms of the following elements in a United
States address:
• Street descriptors
For more information, see the Informatica 10.1.1 Developer Transformation Guide and the Informatica 10.1.1
Address Validator Port Reference.
Write Transformation
Effective in version 10.1.1, when you create a Write transformation from an existing transformation in a
mapping, you can specify the type of link for the input ports of the Write transformation.
You can link ports by name. Also, in a dynamic mapping, you can link ports by name, create a dynamic port
based on a mapping flow, or link ports at run time based on a link policy.
For more information, see the "Write Transformation" chapter in the Informatica 10.1.1 Developer
Transformation Guide.
Web Services
This section describes new web services features in version 10.1.1.
An Informatica REST web service is a web service that receives an HTTP request to perform a GET operation.
A GET operation retrieves data. The REST request is a simple URI string from an internet browser. The client
limits the web service output data by adding filter parameters to the URI.
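For example, a client might request filtered data with a URI of the following form; the resource and parameter names are illustrative, not a documented endpoint:
http://<host>:<port>/CustomerService/getCustomerDetails?City=Boston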
Define a REST web service resource in the Developer tool. A REST web service resource contains the
definition of the REST web service response message and the mapping that returns the response. When you
create an Informatica REST web service, you can define the resource from a data object or you can manually
define the resource.
Workflows
This section describes new workflow features in version 10.1.1.
Informatica Workflows
This section describes new features in Informatica workflows in version 10.1.1.
A workflow terminates if you connect a task or a gateway to a Terminate event and the task output satisfies
a condition on the sequence flow. The Terminate event aborts the workflow before any further task in the
workflow can run.
Add a Terminate event to a workflow if the workflow data can reach a point at which there is no need to run
additional tasks. For example, you might add a Terminate event to end a workflow that contains a Mapping
task and a Human task. Connect the Mapping task to an Exclusive gateway, and then connect the gateway to
a Human task and to a Terminate event. If the Mapping task generates exception record data for the Human
task, the workflow follows the sequence flow to the Human task. If the Mapping task does not generate
exception record data, the workflow follows the sequence flow to the Terminate event.
For more information, see the Informatica 10.1.1 Developer Workflow Guide.
By default, Analyst tool users can view all data and perform any action in the task instances that they work
on.
You can set viewing permissions and editing permissions. The viewing permissions define the data that the
Analyst tool displays for the task instances that the step defines. The editing permissions define the actions
that users can take to update the task instance data. Viewing permissions take precedence over editing
permissions. If you grant editing permissions on a column and you do not grant viewing permissions on the
column, Analyst tool users cannot edit the column data.
For more information, see the Informatica 10.1.1 Developer Workflow Guide.
To display the list of variables, open the Human task and select the step that defines the Human task
instances. On the Notifications view, select the message body of the email notification, enter the $
character, and press CTRL+SPACE.
$[Link]
The time that the workflow engine performs the user instruction to escalate, reassign, or complete the
task instance.
$[Link]
The owner of the task instance at the time that the workflow engine escalates or completes the task. Or,
the owner of the task instance after the engine reassigns the task instance.
The task instance status after the engine performs the user instruction to escalate, reassign, or
complete the task instance. The status names are READY and IN_PROGRESS.
$[Link]
The type of instruction that the engine performs. The variable values are escalate, reassign, and
complete.
$[Link]
For more information, see the Informatica 10.1.1 Developer Workflow Guide.
Chapter 10
Changes (10.1.1)
This chapter includes the following topics:
Support Changes
This section describes support changes in version 10.1.1 HotFix 2.
Previously, the Hive engine supported the Hive driver and HiveServer2 to run mappings in the Hadoop
environment. HiveServer2 and the Hive driver convert HiveQL queries to MapReduce or Tez jobs that are
processed on the Hadoop cluster.
If you install Big Data Management 10.1.1 or upgrade to version 10.1.1, the Hive engine uses the Hive driver
when you run the mappings. The Hive engine no longer supports HiveServer2 to run mappings in the Hadoop
environment. Hive sources and targets that use the HiveServer2 service on the Hadoop cluster are still
supported.
To run mappings in the Hadoop environment, Informatica recommends that you select all run-time engines.
The Data Integration Service uses a proprietary rule-based methodology to determine the best engine to run
the mapping.
For information about configuring the run-time engines for your Hadoop distribution, see the Informatica Big
Data Management 10.1.1 Installation and Configuration Guide. For information about mapping objects that the
run-time engines support, see the Informatica Big Data Management 10.1.1 User Guide.
To see a list of the latest supported versions, see the Product Availability Matrix on the Informatica Customer
Portal: [Link]
MapR Support
Effective in version 10.1.1, Informatica deferred support for Big Data Management on a MapR cluster. To run
mappings on a MapR cluster, use Big Data Management 10.1. Informatica plans to reinstate support in a
future release.
Some references to MapR remain in documentation in the form of examples. Apply the structure of these
examples to your Hadoop distribution.
• Download and install from an RPM package. When you install Big Data Management in an Amazon EMR
environment, you install Big Data Management elements on a local machine to run the Model Repository
Service, Data Integration Service, and other services.
• Install an Informatica instance in the Amazon cloud environment. When you create an implementation of
Big Data Management in the Amazon cloud, you bring online virtual machines where you install and run
Big Data Management.
For more information about installing and configuring Big Data Management on Amazon EMR, see the
Informatica Big Data Management 10.1.1 Installation and Configuration Guide.
• Cloudera Spark 1.6 and Apache Spark 2.0.1 for Cloudera cdh5u8 distribution.
• Apache Spark 2.0.1 for all Hadoop distributions.
Data Analyzer
Effective in version 10.1.1, Informatica dropped support for Data Analyzer. Informatica recommends that you
use a third-party reporting tool to run PowerCenter and Metadata Manager reports. You can use the
recommended SQL queries for building all the reports shipped with earlier versions of PowerCenter.
Operating System
Effective in version 10.1.1, Informatica added support for the following operating systems:
Solaris 11
Windows 10 for Informatica Clients
Analytic Business Components Dropped support Effective in version 10.1.1, Informatica dropped support for the Analytic Business Components (ABC) functionality. You cannot use objects in the ABC repository to read and transform SAP data. Informatica will not ship the ABC transport files.
SAP R/3 version 4.7 Dropped support Effective in version 10.1.1, Informatica dropped support for SAP R/3 4.7 systems. Upgrade to SAP ECC version 5.0 or later.
Reporting Service
Effective in version 10.1.1, Informatica dropped support for the Reporting Service. Informatica recommends
that you use a third-party reporting tool to run PowerCenter and Metadata Manager reports. You can use the
recommended SQL queries for building all the reports shipped with earlier versions of PowerCenter.
Big Data
This section describes the changes to big data in version 10.1.1.
Function Description Changes
AES_DECRYPT Returns decrypted data to string format. Supported on the Spark engine. Previously supported only on the Blaze and Hive engines.
COMPRESS Compresses data using the zlib 1.2.1 compression algorithm. Supported on the Spark engine. Previously supported only on the Blaze and Hive engines.
CRC32 Returns a 32-bit Cyclic Redundancy Check (CRC32) value. Supported on the Spark engine. Previously supported only on the Blaze and Hive engines.
DECOMPRESS Decompresses data using the zlib 1.2.1 compression algorithm. Supported with restrictions on the Spark engine. Previously supported only on the Blaze and Hive engines.
DEC_BASE64 Decodes a base 64 encoded value and returns a string with the binary data representation of the data. Supported on the Spark engine. Previously supported only on the Blaze and Hive engines.
ENC_BASE64 Encodes data by converting binary data to string data using Multipurpose Internet Mail Extensions (MIME) encoding. Supported on the Spark engine. Previously supported only on the Blaze and Hive engines.
MD5 Calculates the checksum of the input value. The function uses Message-Digest algorithm 5 (MD5). Supported on the Spark engine. Previously supported only on the Blaze and Hive engines.
UUID4 Returns a randomly generated 16-byte binary value that complies with variant 4 of the UUID specification described in RFC 4122. Supported on the Spark engine without restrictions. Previously supported on the Blaze engine without restrictions and on the Spark and Hive engines with restrictions.
UUID_UNPARSE Converts a 16-byte binary value to a 36-character string representation as specified in RFC 4122. Supported on the Spark engine without restrictions. Previously supported on the Blaze engine without restrictions and on the Spark and Hive engines with restrictions.
Business Glossary
This section describes the changes to Business Glossary in version 10.1.1.
When you export Glossary assets that contain more than 32,767 characters in one Microsoft Excel cell,
the Analyst tool automatically truncates the cell contents to fewer than 32,763 characters. Microsoft
Excel supports only up to 32,767 characters in a cell. Previously, when you exported a glossary,
Microsoft Excel truncated long text properties that contained more than 32,767 characters in a cell,
causing loss of data without any warning.
For more information about Export and Import, see the "Glossary Administration" chapter in the
Informatica 10.1.1 Business Glossary Guide.
Effective in version 10.1.1, you can edit the following Data Integration Service properties without restarting
the service:
• Cache Directory
• Home Directory
• Maximum Parallelism
• Rejected Files Directory
• Source Directory
• State Store
• Target Directory
• Temporary Directories
Previously, you had to restart the Data Integration Service when you edited these properties.
Previously, the precision was set to 15 and the scale was set to 0.
For more information, see the "Data Type Reference" appendix in the Informatica 10.1.1 Developer Tool Guide.
Informatica Analyst
This section describes changes to the Analyst tool in version 10.1.1.
Profiles
This section describes changes to Analyst tool features for profiles.
Run-time Environment
Effective in version 10.1.1, after you choose the Hive option as the run-time environment, select a Hadoop
connection to run the profiles.
Previously, after you chose the Hive option as the run-time environment, you selected a Hive connection to
run the profiles.
For more information about run-time environment, see the "Column Profiles in Informatica Analyst" chapter in
the Informatica 10.1.1 Data Discovery Guide.
Informatica Developer
This section describes changes to the Developer tool in version 10.1.1.
Profiles
This section describes changes to Developer tool features for profiles.
Run-time Environment
Effective in version 10.1.1, after you choose the Hive option as the run-time environment, select a Hadoop
connection to run the profiles.
For more information about run-time environment, see the "Data Object Profiles" chapter in the Informatica
10.1.1 Data Discovery Guide.
Mappings
This section describes changes to mappings in version 10.1.1.
Informatica Mappings
This section describes the changes to the Informatica mappings in version 10.1.1.
Effective in version 10.1.1, you can reorder the generated ports of a dynamic port based on the following
options:
• The order of ports in the group or dynamic port of the upstream transformation.
• The order of input rules for the dynamic port.
• The order of ports in the nearest transformation with static ports.
Default is to reorder based on the ports in the upstream transformation.
Previously, you could reorder generated ports based on the order of input rules for the dynamic port.
For more information, see the "Dynamic Mappings" chapter in the Informatica 10.1.1 Developer Mapping
Guide.
Relationships View
Effective in version 10.1.1, you can view business terms, related glossary assets, related technical assets,
and similar columns for the selected asset.
Previously, you could view asset relationships such as columns, data domains, tables, and views.
For more information about relationships view, see the "View Relationships" chapter in the Informatica 10.1.1
Enterprise Information Catalog User Guide.
Incremental loading for Cloudera Navigator resources is disabled by default. Previously, incremental
loading was enabled by default.
When incremental loading is enabled, Metadata Manager performs a full metadata load when the
Cloudera administrator invokes a purge operation in Cloudera Navigator after the last successful
metadata load.
Additionally, there are new guidelines that explain when you might want to disable incremental loading.
You can use the search query to exclude entity types besides HDFS entities from the metadata load. For
example, you can use the search query to exclude YARN or Oozie job executions.
To reduce complexity of the data lineage diagram, Metadata Manager has the following changes:
• Metadata Manager no longer displays data lineage for Hive query template parts. You can run data
lineage analysis on Hive query templates instead.
• For partitioned Hive tables, Metadata Manager displays data lineage links between each column in
the table and the parent directory that contains the related HDFS entities. Previously, Metadata
Manager displayed a data lineage link between each column and each related HDFS entity.
For more information about Cloudera Navigator resources, see the "Database Management Resources"
chapter in the Informatica 10.1.1 Metadata Manager Administrator Guide.
Netezza Resources
Effective in version 10.1.1, Metadata Manager supports multiple schemas for Netezza resources.
• When you create or edit a Netezza resource, you select the schemas from which to extract metadata. You
can select one or multiple schemas.
• Metadata Manager organizes Netezza objects in the metadata catalog by schema. The database does not
appear in the metadata catalog.
• When you configure connection assignments to Netezza, you select the schema to which you want to
assign the connection.
Because of these changes, Netezza resources behave like other types of relational resources.
Previously, when you created or edited a Netezza resource, you could not select the schemas from which to
extract metadata. If you created a resource from a Netezza database that included multiple schemas,
Metadata Manager ignored the schema information. Metadata Manager organized Netezza objects in the
metadata catalog by database. When you configured connection assignments to Netezza, you selected the
database to which to assign the connection.
PowerExchange Adapters
This section describes changes to PowerExchange adapters in version 10.1.1.
For more information, see the Informatica 10.1.1 PowerExchange for Hive User Guide.
For more information, see the Informatica 10.1.1 PowerExchange for Tableau User Guide.
For more information, see the Informatica 10.1.1 PowerExchange for Essbase User Guide for PowerCenter.
For more information, see the Informatica 10.1.1 PowerExchange for Greenplum User Guide for PowerCenter.
For more information, see the Informatica 10.1.1 PowerExchange for Microsoft Dynamics CRM User Guide for
PowerCenter.
For more information, see the Informatica 10.1.1 PowerExchange for Tableau User Guide for PowerCenter.
Transformations
This section describes changed transformation behavior in version 10.1.1.
Informatica Transformations
This section describes the changes to the Informatica transformations in version 10.1.1.
For more information, see the Informatica 10.1.1 Developer Transformation Guide and the Informatica 10.1.1
Address Validator Port Reference.
Workflows
This section describes changed workflow behavior in version 10.1.1.
Informatica Workflows
This section describes the changes to Informatica workflow behavior in version 10.1.1.
Nested Inclusive Gateways
Effective in version 10.1.1, you can add one or more pairs of gateways to a sequence flow between two
Inclusive gateways or two Exclusive gateways.
Previously, you invalidated the workflow if you added a pair of gateways to a sequence flow between two
Inclusive gateways.
For more information, see the Informatica 10.1.1 Developer Workflow Guide.
Documentation
This section describes documentation changes in version 10.1.1.
Metadata Manager
This section describes release tasks for Metadata Manager in version 10.1.1.
Update the value of the Multiple Threads property for the following resources:
• Business Objects
• Cognos
• Oracle Business Intelligence Enterprise Edition
• Tableau
The Multiple Threads configuration property controls the number of worker threads that the Metadata
Manager Agent uses to extract metadata asynchronously. If you do not update the Multiple Threads property
after upgrade, the Metadata Manager Agent calculates the number of worker threads. The Metadata Manager
Agent allocates between one and six threads based on the JVM architecture and the number of available CPU
cores on the machine that runs the Metadata Manager Agent.
For more information about the Multiple Threads configuration property, see the "Business Intelligence
Resources" chapter in the Informatica 10.1.1 Metadata Manager Administrator Guide.
Set the Java heap size for the Cloudera Navigator Server to at least 2 GB. If the heap size is not sufficient, the
resource load fails with a connection refused error.
Set the maximum heap size for the Metadata Manager Service to at least 4 GB. If you perform simultaneous
resource loads, increase the maximum heap size by at least 1 GB for each resource load. For example, to
load two Cloudera Navigator resources simultaneously, increase the maximum heap size by 2 GB. Therefore,
you would set the Max Heap Size property for the Metadata Manager Service to at least 6144 MB (6 GB). If
the maximum heap size is not sufficient, the load fails with an out of memory error.
For more information about Cloudera Navigator resources, see the "Database Management Resources"
chapter in the Informatica 10.1.1 Metadata Manager Administrator Guide.
Tableau Resources
Effective in version 10.1.1, the Tableau model has minor changes. Therefore, you must purge and reload
Tableau resources after you upgrade.
For more information about Tableau resources, see the "Business Intelligence Resources" chapter in the
Informatica 10.1.1 Metadata Manager Administrator Guide.
Chapter 12
A data lake is a shared repository of raw and enterprise data from a variety of sources. It is often built over a
distributed Hadoop cluster, which provides an economical and scalable persistence and compute layer.
Hadoop makes it possible to store large volumes of structured and unstructured data from various enterprise
systems within and outside the organization. Data in the lake can include raw and refined data, master data
and transactional data, log files, and machine data.
Organizations are also looking to provide ways for different kinds of users to access and work with all of the
data in the enterprise, within the Hadoop data lake as well as data outside the data lake. They want data
analysts and data scientists to be able to use the data lake for ad-hoc self-service analytics to drive business
innovation, without exposing the complexity of underlying technologies or the need for coding skills. IT and
data governance staff want to monitor data-related user activities in the enterprise. Without a strong data
management and governance foundation enabled by intelligence, data lakes can turn into data swamps.
In version 10.1, Informatica introduces Intelligent Data Lake, a new product to help customers derive more
value from their Hadoop-based data lake and make data available to all users in the organization.
Intelligent Data Lake is a collaborative self-service big data discovery and preparation solution for data
analysts and data scientists. It enables analysts to rapidly discover and turn raw data into insight and allows
IT to ensure quality, visibility, and governance. With Intelligent Data Lake, analysts spend more time on
analysis and less time on finding and preparing data.
• Data analysts can quickly and easily find and explore trusted data assets within the data lake and outside
the data lake using semantic search and smart recommendations.
• Data analysts can transform, cleanse, and enrich data in the data lake using an Excel-like spreadsheet
interface in a self-service manner without the need for coding skills.
• Data analysts can publish data and share knowledge with the rest of the community and analyze the data
using their choice of BI or analytic tools.
• IT and governance staff can monitor user activity related to data usage in the lake.
• IT can track data lineage to verify that data is coming from the right sources and going to the right
targets.
• IT can enforce appropriate security and governance on the data lake.
• IT can operationalize the work done by data analysts into a data delivery process that can be repeated and
scheduled.
• Find the data in the lake as well as in the other enterprise systems using smart search and inference-
based results.
• Filter assets based on dynamic facets using system attributes and custom defined classifications.
Explore
• Get an overview of assets, including custom attributes, profiling statistics for data quality, data
domains for business content, and usage information.
• Add business context information by crowd-sourcing metadata enrichment and tagging.
• Preview sample data to get a sense of the data asset based on user credentials.
• Get lineage of assets to understand where data is coming from and where it is going and to build
trust in the data.
• Know how the data asset is related to other assets in the enterprise based on associations with other
tables or views, users, reports and data domains.
• Progressively discover additional assets with lineage and relationship views.
Acquire
Collaborate
Recommendations
• Improve productivity by using recommendations based on the behavior and shared knowledge of
other users.
• Get recommendations for alternate assets that can be used in a project.
• Get recommendations for additional assets that can be used in a project.
• Recommendations change based on what is in the project.
Prepare
Publish
• Use the power of the underlying Hadoop system to run large-scale data transformation without
coding or scripting.
• Run data preparation steps on actual large data sets in the lake to create new data assets.
• Publish the data in the lake as a Hive table in the desired database.
• Create, append, or overwrite assets for published data.
IT Monitoring
• Keep track of user, data asset, and project activities by building reports on top of the audit database.
• Find information such as the top active users, the top datasets by size, prior updates, most reused
assets, and the most active projects.
IT Operationalization
For more information, see the Informatica PowerExchange for Amazon Redshift 10.1 User Guide.
For more information, see the Informatica PowerExchange for Microsoft Azure Blob Storage 10.1 User Guide.
For more information, see the Informatica PowerExchange for Microsoft Azure SQL Data Warehouse 10.1 User
Guide.
Application Services
This section describes new application services features in version 10.1.
System Services
This section describes new system service features in version 10.1.
For more information about schedules, see the "Schedules" chapter in the Informatica 10.1 Administrator
Guide.
Big Data
This section describes new big data features in version 10.1.
Hadoop Ecosystem
Support in Big Data Management 10.1
Effective in version 10.1, Informatica supports the following updated versions of Hadoop distributions:
For the full list of Hadoop distributions that Big Data Management 10.1 supports, see the Informatica Big
Data Management 10.1 Installation and Configuration Guide.
• Apache Knox
• Apache Ranger
• Apache Sentry
• HDFS Transparent Encryption
Limitations apply to some combinations of security system and Hadoop distribution platform. For more
information on Informatica support for these technologies, see the Informatica Big Data Management 10.1
Security Guide.
Spark is an Apache project with a run-time engine that can run mappings on the Hadoop cluster. Configure
the Hadoop connection properties specific to the Spark engine. After you create the mapping, you can
validate it and view the execution plan in the same way as the Blaze and Hive engines.
When you push mapping logic to the Spark engine, the Data Integration Service generates a Scala program
and packages it into an application. It sends the application to the Spark executor that submits it to the
Resource Manager on the Hadoop cluster. The Resource Manager identifies resources to run the application.
You can monitor the job in the Administrator tool.
For more information about using Spark to run mappings, see the Informatica Big Data Management 10.1 User
Guide.
To use Sqoop, you must configure Sqoop properties in a JDBC connection and run the mapping in the
Hadoop environment. You can configure Sqoop connectivity for relational data objects, customized data
objects, and logical data objects that are based on a JDBC-compliant database. For example, you can
configure Sqoop connectivity for the following databases:
• Aurora
• IBM DB2
• IBM DB2 for z/OS
• Greenplum
• Microsoft SQL Server
• Netezza
• Oracle
• Teradata
You can also run a profile on data objects that use Sqoop in the Hive run-time environment.
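For example, a JDBC connection for an Oracle source can use the standard Oracle thin-driver connection string, with placeholder host, port, and service name:
jdbc:oracle:thin:@//<host>:<port>/<service_name>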
For more information, see the Informatica 10.1 Big Data Management User Guide.
• Address Validator
• Case Converter
• Comparison
• Consolidation
• Data Processor
• Decision
• Key Generator
• Labeler
Effective in version 10.1, the following transformations have additional support on the Blaze engine:
Business Glossary
This section describes new Business Glossary features in version 10.1.
For more information, see the "Glossary Content Management" chapter in the Informatica 10.1 Business
Glossary Guide.
For more information, see the "Finding Glossary Content" chapter in the Informatica 10.1 Business Glossary
Guide.
For more information, see the "Glossary Administration" chapter in the Informatica 10.1 Business Glossary
Guide.
For example, enter the following syntax in the metadata connection string URL:
jdbc:informati[Link]//<host name>:<port>;DatabaseName=<database
name>;ischemaname=<schema_name1>|<schema_name2>|<schema_name3>
For more information, see the Informatica 10.1 Developer Tool Guide and Informatica 10.1 Analyst Tool Guide.
infacmd bg Commands
The following table describes new infacmd bg commands:
Command Description
importGlossary Imports business glossaries from .xlsx or .zip files that were exported from the Analyst tool.
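A sketch of the command with the standard infacmd connection options and placeholder values; the option that identifies the exported .xlsx or .zip file is documented in the Command Reference:
infacmd bg importGlossary -dn MyDomain -un Administrator -pd MyPassword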
Command Description
ListApplicationPermissions Lists the permissions that a user or group has for an application.
ListApplicationObjectPermissions Lists the permissions that a user or group has for an application object such as
mapping or workflow.
Command Description
BackupData Backs up HDFS data in the internal Hadoop cluster to a .zip file.
removeSnapshot Removes existing HDFS snapshots so that you can run the infacmd ihs BackupData
command successfully to back up HDFS data.
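Because existing HDFS snapshots can prevent the backup from completing, a typical sequence runs removeSnapshot before BackupData. This sketch uses only the standard infacmd connection options with placeholder values; see the Command Reference for the remaining options:
infacmd ihs removeSnapshot -dn MyDomain -un Administrator -pd MyPassword
infacmd ihs BackupData -dn MyDomain -un Administrator -pd MyPassword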
For more information, see the "infacmd ihs Command Reference" chapter in the Informatica 10.1 Command
Reference.
Command Description
ListDefaultOSProfiles Lists the default operating system profiles for a user or group.
ListDomainCiphers Displays one or more of the following cipher suite lists used by the Informatica domain or
a gateway node:
Black list
Default list
Effective list
The list of cipher suites that the Informatica domain uses after you configure it with
the infasetup updateDomainCiphers command. The effective list supports cipher
suites in the default list and white list but blocks cipher suites in the black list.
White list
User-specified list of cipher suites that the Informatica domain can use in addition to
the default list.
You can specify which lists you want to display.
UnassignDefaultOSProfile Removes the default operating system profile that is assigned to a user or group.
For more information, see the "infacmd isp Command Reference" chapter in the Informatica 10.1 Command
Reference.
Command Description
backupData Takes a snapshot of the HDFS directory and creates a .zip file of the snapshot in the local
machine.
restoreData Retrieves the HDFS data backup .zip file from the local system and restores data in the HDFS
directory.
For more information, see the "infacmd ldm Command Reference" chapter in the Informatica 10.1 Command
Reference.
For more information, see the "infacmd ms Command Reference" chapter in the Informatica 10.1 Command
Reference.
infacmd ps Commands
The following table describes new options for infacmd ps commands:
For more information, see the "infacmd ps Command Reference" chapter in the Informatica 10.1 Command
Reference.
For more information, see the "infacmd sch Command Reference" chapter in the Informatica 10.1 Command
Reference.
Command Description
ListDomainCiphers Displays one or more of the following cipher suite lists used by the Informatica domain or a
gateway node:
Black list
Default list
Effective list
The list of cipher suites that the Informatica domain uses after you configure it with the
infasetup updateDomainCiphers command. The effective list supports cipher suites in the
default list and white list but blocks cipher suites in the black list.
White list
User-specified list of cipher suites that the Informatica domain can use.
You can specify which lists you want to display.
updateDomainCiphers Updates the cipher suites that the Informatica domain can use with a new effective list.
For more information, see the "infasetup Command Reference" chapter in the Informatica 10.1 Command
Reference.
pmrep Commands
The following table describes a new pmrep command:
For more information, see the "pmrep Command Reference" chapter in the Informatica 10.1 Command
Reference.
Documentation
This section describes new or updated guides with the Informatica documentation in version 10.1.
Effective in version 10.1, the Metadata Manager Command Reference contains information about all of
the Metadata Manager command line programs. The Metadata Manager Command Reference is included
in the online help for Metadata Manager. Previously, information about the Metadata Manager command
line programs was included in the Metadata Manager Administrator Guide.
For more information, see the Informatica 10.1 Metadata Manager Command Reference.
Effective in Live Data Map version 2.0, the Informatica Administrator Reference for Live Data Map
contains basic reference information on Informatica Administrator tasks that you need to perform in Live
Data Map. The Informatica Administrator Reference for Live Data Map is included in the online help for
Informatica Administrator.
For more information, see the Informatica 2.0 Administrator Reference for Live Data Map.
Exception Management
This section describes new exception management features in version 10.1.
Effective in version 10.1, you can configure the options in an exception task to search and replace data
values based on the data type. You can configure the options to search and replace data in any column
that contains date, string, or numeric data.
When you specify a data type, the Analyst tool searches for the value that you enter in any column that
uses the data type. You can find and replace any value that a string data column contains. You can
perform case-sensitive searches on string data. You can search for a partial match or a complete match
between the search value and the contents of a field in a string data column.
For more information, see the "Exception Records" chapter in the Informatica 10.1 Exception Management
Guide.
Informatica Administrator
This section describes new Administrator tool features in version 10.1.
Domain View
Effective in version 10.1, you can view historical statistics for CPU usage and memory usage in the domain.
You can view the CPU and memory usage statistics for the last 60 minutes, and you can toggle between the
current statistics and the last hour trend. In the Domain view, choose Actions > Current or Actions > Last
Hour Trend in the CPU Usage panel or the Memory Usage panel.
Monitoring
Effective in version 10.1, the Monitor tab in the Administrator tool has the following features:
The Summary Statistics view has a Details view. You can view information about jobs, export the list to
a .csv file, and link to a job in the Execution Statistics view. To access the Details view, click View
Details.
When you select an Ad Hoc or a deployed mapping job in the Contents panel of the Monitor tab, the
Details panel contains the Historical Statistics view. The Historical Statistics view shows averaged data
from multiple runs for a specific job. For example, you can view the minimum, maximum, and average
duration of the mapping job. You can view the average amount of CPU that the job consumes when it
runs.
Informatica Analyst
This section describes new Analyst tool features in version 10.1.
Profiles
This section describes new Analyst tool features for profiles and scorecards.
Conformance Criteria
Effective in version 10.1, you can select a minimum number of conforming rows as conformance criteria for
data domain discovery.
For more information about conformance criteria, see the "Data Domain Discovery in Informatica Analyst"
chapter in the Informatica 10.1 Data Discovery Guide.
For more information about the option to exclude null values from data domain discovery, see the "Data
Domain Discovery in Informatica Analyst" chapter in the Informatica 10.1 Data Discovery Guide.
For more information about run-time environment, see the "Data Object Profiles" chapter in the Informatica
10.1 Data Discovery Guide.
Scorecard Dashboard
Effective in version 10.1, you can view the following scorecard details in the scorecard dashboard:
Informatica Developer
This section describes new Informatica Developer features in version 10.1.
For more information, see the Informatica 10.1 Developer Tool Guide.
For more information, see the Informatica 10.1 Developer Mapping Guide.
DDL Query
Effective in version 10.1, when you choose to create or replace the target at run time, you can define a DDL
query based on which the Data Integration Service must create or replace the target table at run time. You
can define a DDL query for relational and Hive targets.
You can enter placeholders in the DDL query. The Data Integration Service substitutes the placeholders with
the actual values at run time. For example, if a table contains 50 columns, instead of entering all the column
names in the DDL query, you can enter a placeholder.
• INFA_TABLE_NAME
• INFA_COLUMN_LIST
• INFA_PORT_SELECTOR
You can also enter parameters in the DDL query.
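For example, the following DDL query relies on the placeholders rather than naming the target table and its columns, and the Data Integration Service substitutes the actual values at run time:
CREATE TABLE INFA_TABLE_NAME ( INFA_COLUMN_LIST )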
For more information, see the Informatica 10.1 Developer Mapping Guide.
Profiles
This section describes new Developer tool features for profiles and scorecards.
For more information about column profiles on Avro and Parquet data sources, see the "Column Profiles on
Semi-structured Data Sources" chapter in the Informatica 10.1 Data Discovery Guide.
Conformance Criteria
Effective in version 10.1, you can select a minimum number of conforming rows as conformance criteria for
data domain discovery.
For more information about conformance criteria, see the "Data Domain Discovery in Informatica Developer"
chapter in the Informatica 10.1 Data Discovery Guide.
For more information about the option to exclude null values from data domain discovery, see the "Data
Domain Discovery in Informatica Developer" chapter in the Informatica 10.1 Data Discovery Guide.
Run-time Environment
Effective in version 10.1, you can choose the Hadoop option as the run-time environment when you create or
edit a column profile, data domain discovery profile, enterprise discovery profile, or scorecard. When you
choose the Hadoop option, the Data Integration Service pushes the profile logic to the Blaze engine on the
Hadoop cluster to run the profiles.
For more information about run-time environment, see the "Data Object Profiles" chapter in the Informatica
10.1 Data Discovery Guide.
When you create a connector that uses REST APIs to connect to the data source, you can use pre-
defined data types. You can use the following Informatica Platform data types:
• string
• integer
• bigInteger
• decimal
• double
• binary
• date
Procedure pattern
When you create a connector for Informatica Cloud, you can define native metadata objects for
procedures in data sources. You can use the following options to define the native metadata object for a
procedure:
Manually create the native metadata object
When you define the native metadata objects manually, you can specify the following details:
Procedure extension Additional metadata information that you can specify for a procedure.
Parameter extension Additional metadata information that you can specify for parameters.
Call capability attributes Additional metadata information that you can specify to create a read or write
call to a procedure.
When you use swagger specifications to define the native metadata object, you can either use an
existing swagger specification or you can generate a swagger specification by sampling the REST
end point.
You can specify common metadata information for Informatica Cloud connectors, such as schema
name and foreign key name.
After you design and implement the connector components, you can export the connector files for
Informatica Cloud by specifying the plug-in ID and plug-in version.
After you design and implement the connector components, you can export the connector files for
PowerCenter by specifying the PowerCenter version.
Email Notifications
Effective in version 10.1, you can configure and receive email notifications on the Catalog Service status to
closely monitor and troubleshoot application service issues. You use the Email Service and the
associated Model Repository Service to send email notifications.
For more information, see the Informatica 10.1 Administrator Reference for Live Data Map.
Keyword Search
Effective in version 10.1, you can use the following keywords to restrict the search results to specific types of
assets:
• Table
• Column
• File
• Report
For example, if you want to search for all the tables with the term "customer" in them, type "tables with
customer" in the Search box. Enterprise Information Catalog lists all the tables that include the search term
"customer" in the table name.
For more information, see the Informatica 10.1 Enterprise Information Catalog User Guide.
Profiling
Effective in version 10.1, Live Data Map can run profiles in the Hadoop environment. When you choose the
Hadoop connection, the Data Integration Service pushes the profile logic to the Blaze engine on the Hadoop
cluster to run profiles.
For more information, see the Informatica 10.1 Live Data Map Administrator Guide.
Scanners
Effective in version 10.1, you can extract metadata from the following sources:
• Amazon Redshift
• Amazon S3
Mappings
This section describes new mapping features in version 10.1.
Informatica Mappings
This section describes new features for Informatica mappings in version 10.1.
To generate a mapping or logical data object from an SQL query, click File > New > Mapping from SQL Query.
Enter an SQL query or select the location of the text file with an SQL query that you want to convert to a
mapping. You can also generate a logical data object from an SQL query that contains only SELECT
statements.
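For example, a simple query such as the following (table and column names are illustrative) can be converted to a mapping or, because it contains only a SELECT statement, to a logical data object:

SELECT customer_id, customer_name, region
FROM customers
WHERE region = 'WEST'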
For more information about generating a mapping or a logical data object from an SQL query, see the
Informatica 10.1 Developer Mapping Guide.
Metadata Manager
This section describes new Metadata Manager features in version 10.1.
Universal Resources
Effective in version 10.1, you can create universal resources to extract metadata from some metadata
sources for which Metadata Manager does not package a model. For example, you can create a universal
resource to extract metadata from an Apache Hadoop Hive Server, QlikView, or Talend metadata source.
To extract metadata from these sources, you first create an XConnect that represents the metadata source
type. The XConnect includes the model for the metadata source. You then create one or more resources that
are based on the model.
For more information about universal resources, see the "Universal Resources" chapter in the Informatica
10.1 Metadata Manager Administrator Guide.
To enable incremental loading for an Oracle resource or for a Teradata resource, enable the Incremental load
option in the resource configuration properties. This option is disabled by default.
For more information about incremental loading for Oracle and Teradata resources, see the "Database
Management Resources" chapter in the Informatica 10.1 Metadata Manager Administrator Guide.
You can hide objects such as staging databases from data lineage diagrams. If you want to view the hidden
objects, you can switch from the summary view to the detail view through the task bar.
For more information about the summary view of data lineage diagrams, see the "Working with Data Lineage"
chapter in the Informatica 10.1 Metadata Manager User Guide.
To create a resource that extracts metadata from packages in different package files, specify the directory
that contains the package files in the Directory resource configuration property.
For more information about creating and configuring Microsoft SQL Server Integration Services resources,
see the "Database Management Resources" chapter in the Informatica 10.1.1 Metadata Manager
Administrator Guide.
For more information about the mmXConPluginUtil command line program, see the "mmXConPluginUtil"
chapter in the Informatica 10.1 Metadata Manager Command Reference.
Application Properties
Effective in version 10.1, you can configure new application properties in the Metadata Manager
[Link] file. This feature is also available in 9.6.1 HotFix 4. It is not available in 10.0.
The following table describes new Metadata Manager application properties in [Link]:
Property Description
[Link] - Maximum number of errors that the Metadata Manager Service can encounter before the custom resource load fails.
[Link] - Number of errors that the Metadata Manager Service writes to the in-memory cache and to the [Link] file in one batch when you load a custom resource.
For more information about the [Link] file, see the "Metadata Manager Properties Files" appendix in
the Informatica 10.1 Metadata Manager Administrator Guide.
For more information, see the Informatica 10.1 Upgrading from Version 9.5.1 Guide.
For more information, see the Informatica 10.1 PowerCenter Designer Guide.
For more information, see the "pmrep Command Reference" chapter in the Informatica 10.1 Command
Reference.
For more information, see the Informatica PowerCenter 10.1 Advanced Workflow Guide.
PowerExchange Adapters
This section describes new PowerExchange adapter features in version 10.1.
For more information, see the Informatica PowerExchange for HDFS 10.1 User Guide.
For more information, see the Informatica PowerExchange for Hive 10.1 User Guide.
For more information, see the Informatica PowerExchange for Teradata Parallel Transporter API 10.1 User
Guide.
PowerExchange Adapters for PowerCenter
This section describes new PowerCenter adapter features in version 10.1.
For more information, see the "Greenplum Sessions and Workflows" chapter in the Informatica 10.1
PowerExchange for Greenplum User Guide for PowerCenter.
Security
This section describes new security features in version 10.1.
The Informatica domain uses an effective list of cipher suites that combines the cipher suites in the default
list and the whitelist but blocks the cipher suites in the blacklist.
For more information, see the "Domain Security" chapter in the Informatica 10.1 Security Guide.
The Data Integration Service uses operating system profiles to run mappings, profiles, scorecards, and
workflows. The operating system profile contains the operating system user name, service process variables,
Hadoop impersonation properties, the Analyst Service properties, environment variables, and permissions.
The Data Integration Service runs the mapping, profile, scorecard, or workflow with the system permissions
of the operating system user and the properties defined in the operating system profile.
For more information about operating system profiles, see the "Users and Groups" chapter in the Informatica
10.1 Security Guide.
For more information about application and application object permissions, see the "Permissions" chapter in
the Informatica 10.1 Security Guide.
Informatica Transformations
This section describes new features in Informatica transformations in version 10.1.
The Address Validator transformation contains additional address functionality for the following countries:
Ireland
Effective in version 10.1, you can return the eircode for an address in Ireland. An eircode is a seven-
character code that uniquely identifies an Ireland address. The eircode system covers all residences,
public buildings, and business premises and includes apartment addresses and addresses in rural
townlands.
To return the eircode for an address, select a Postcode port or a Postcode Complete port.
France
Effective in version 10.1, address validation uses the Hexaligne 3 repository of the National Address
Management Service to certify a France address to the SNA standard.
The Hexaligne 3 data set contains additional information on delivery point addresses, including sub-
building details such as building names and residence names.
Germany
Effective in version 10.1, you can retrieve the three-digit street code part of the Frachtleitcode or Freight
Code as an enrichment to a valid Germany address. The street code identifies the street within the
address.
To retrieve the street code as an enrichment to verified Germany addresses, select the Street Code DE
port. Find the port in the DE Supplementary port group.
South Korea
Effective in version 10.1, you can verify older, lot-based addresses and addresses with older, six-digit
post codes in South Korea. You can verify and update addresses that use the current format, the older
format, and a combination of the current and older formats. A current South Korea address has a street-
based format and includes a five-digit post code. A non-current address has a lot-based format and
includes a six-digit post code.
To verify a South Korea address in an older format and to change the information to another format, use
the Address Identifier KR ports. You update the address information in two stages. First, run the address
validation mapping in batch or interactive mode and select the Address Identifier KR output port. Then,
run the address validation mapping in address code lookup mode and select the Address Identifier KR
input port. Find the Address Identifier KR input port in the Discrete port group. Find the Address Identifier
KR output port in the KR Supplementary port group.
To verify that the Address Validator transformation can read and write the address data, add the
Supplementary KR Status port to the transformation.
Informatica adds the Address Identifier KR ports, the Supplementary KR Status port, and the KR
Supplementary port group in version 10.1.
Effective in version 10.1, you can retrieve South Korea address data in the Hangul script and in a Latin
script.
United Kingdom
Effective in version 10.1, you can retrieve delivery point type data and organization key data for a United
Kingdom address. The delivery point type is a single-character code that indicates whether the address
points to a residence, a small organization, or a large organization. The organization key is an eight-digit
code that the Royal Mail assigns to small organizations.
To add the delivery point type to a United Kingdom address, use the Delivery Point Type GB port. To add
the organization key to a United Kingdom address, use the Organization Key GB port. Find the ports in
the UK Supplementary port group. To verify that the Address Validator transformation can read and write
the data, add the Supplementary UK Status port to the transformation.
Informatica adds the Delivery Point Type GB port and the Organization Key GB port in version 10.1.
These features are also available in 9.6.1 HotFix 4. They are not available in 10.0.
For more information, see the Informatica 10.1 Address Validator Port Reference.
REST API
Effective in version 10.1, an application can call the Data Transformation REST API to run a Data
Transformation service.
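A minimal sketch of such a call from Python, assuming a hypothetical endpoint and payload; the actual resource paths and parameter names are defined in the REST API guide referenced below:

import requests

# Hypothetical host, port, resource path, and parameter names, shown
# for illustration only; consult the Data Transformation REST API
# User Guide for the actual values.
url = "http://dt-host:9100/RunService"
payload = {"serviceName": "MyDTService", "input": "<input>...</input>"}

response = requests.post(url, json=payload)  # invoke the service
response.raise_for_status()                  # fail loudly on HTTP errors
print(response.text)                         # service output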
For more information, see the Informatica 10.1 Data Transformation REST API User Guide.
For more information, see the Informatica 10.1 Data Transformation User Guide.
The Relational to Hierarchical transformation is an optimized transformation introduced in version 10.1 that
converts relational input to hierarchical output.
For more information, see the Informatica 10.1 Developer Transformation Guide.
Workflows
This section describes new workflow features in version 10.1.
PowerCenter Workflows
This section describes new features in PowerCenter workflows in version 10.1.
For more information, see the "pmrep Command Reference" chapter in the Informatica 10.1 Command
Reference.
Chapter 14
Changes (10.1)
This chapter describes changes in version 10.1.
Support Changes
Effective in version 10.1, Informatica announces the following support changes:
Informatica Installation
Effective in version 10.1, Informatica implemented the following change in operating system support:
SUSE 11 - Added support. Effective in version 10.1, Informatica added support for SUSE Linux Enterprise Server 11.
If you upgrade to version 10.1, you can continue to use the Reporting Service. You can continue to use Data
Analyzer. Informatica recommends that you begin using a third-party reporting tool before Informatica drops
support. You can use the recommended SQL queries for building all the reports shipped with earlier versions
of PowerCenter.
If you install version 10.1, you cannot create a Reporting Service. You cannot use Data Analyzer. You must
use a third-party reporting tool to run PowerCenter and Metadata Manager reports.
For information about the PowerCenter Reports, see the Informatica PowerCenter Using PowerCenter Reports
Guide. For information about the PowerCenter repository views, see the Informatica PowerCenter Repository
Guide. For information about the Metadata Manager repository views, see the Informatica Metadata Manager
View Reference.
If you upgrade to version 10.1, you can continue to use the Reporting and Dashboards Service. Informatica
recommends that you begin using a third-party reporting tool before Informatica drops support. You can use
the recommended SQL queries for building all the reports shipped with earlier versions of PowerCenter.
If you install version 10.1, you cannot create a Reporting and Dashboards Service. You must use a third-party
reporting tool to run PowerCenter and Metadata Manager reports.
For information about the PowerCenter Reports, see the Informatica PowerCenter Using PowerCenter Reports
Guide. For information about the PowerCenter repository views, see the Informatica PowerCenter Repository
Guide. For information about the Metadata Manager repository views, see the Informatica Metadata Manager
View Reference.
Application Services
This section describes changes to application services in version 10.1.
System Services
This section describes changes to system services in version 10.1.
Previously, scorecard notifications used the email server that you configured on the domain.
For more information about the Email Service, see the "System Services" chapter in the Informatica 10.1
Application Service Guide.
Previously, you had to download and manually install the JCE policy file for AES encryption.
Business Glossary
This section describes changes to Business Glossary in version 10.1.
Custom Relationships
Effective in version 10.1, you can create custom relationships in the Manage Glossary Relationships
workspace. Under Manage, click Glossary Relationships to open the Manage Glossary Relationships
workspace.
Previously, you had to edit the glossary template to create custom relationships.
For more information, see the "Glossary Administration" chapter in the Informatica 10.1 Business Glossary
Guide.
For more information, see the "Finding Glossary Content" chapter in the Informatica 10.1 Business Glossary
Guide.
Governed By Relationship
Effective in version 10.1, you can no longer create a "governed by" relationship between terms. The "governed
by" relationship can only be used between a policy and a term.
For more information, see the Informatica 10.1 Business Glossary Guide.
Glossary Workspace
Effective in version 10.1, in the Glossary workspace, the Analyst tool displays multiple Glossary assets in
separate tabs.
Previously, the Analyst tool displayed only one Glossary asset in the Glossary workspace.
For more information, see the "Finding Glossary Content" chapter in the Informatica 10.1 Business Glossary
Guide.
For more information, see the Informatica 10.1 Business Glossary Desktop Installation and Configuration
Guide.
Previously, the Business Glossary command program was not supported in a domain that uses Kerberos
authentication.
For more information, see the "infacmd bg Command Reference" chapter in the Informatica 10.1 Command
Reference.
Command Description
BackupDARepositoryContents - Backs up content for a Data Analyzer repository to a binary file. When you back up the content, the Reporting Service saves the Data Analyzer repository including the repository objects, connection information, and code page information.
CreateDARepositoryContents - Creates content for a Data Analyzer repository. You add repository content when you create the Reporting Service or delete the repository content. You cannot create content for a repository that already includes content.
DeleteDARepositoryContents - Deletes repository content from a Data Analyzer repository. When you delete repository content, you also delete all privileges and roles assigned to users for the Reporting Service.
RestoreDARepositoryContents - Restores content for a Data Analyzer repository from a binary file. You can restore metadata from a repository backup file to a database. If you restore the backup file on an existing database, you overwrite the existing content.
UpdateReportingService - Updates or creates the service and lineage options for the Reporting Service.
UpgradeDARepositoryUsers - Upgrades users and groups in a Data Analyzer repository. When you upgrade the users and groups in the Data Analyzer repository, the Service Manager moves them to the Informatica domain.
For more information, see the "infacmd isp Command Reference" chapter in the Informatica 10.1 Command
Reference.
Exception Management
This section describes the changes to exception management in version 10.1.
Effective in version 10.1, you can configure the options in an exception task to find and replace data
values in one or more columns. You can specify a single column, or you can specify any column that
uses a string, date, or numeric data type. By default, a find and replace operation applies to all columns
that contain string data.
Previously, a find and replace operation ran by default on all of the data in the task. In version 10.1, you
cannot configure a find and replace operation to run on all of the data in the task.
For more information, see the Exception Records chapter in the Informatica 10.1 Exception Management
Guide.
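The following Python sketch is a generic illustration (not Informatica code) of the default scope described above: a find-and-replace that applies only to columns that contain string data unless specific columns are named:

# Generic illustration of a find-and-replace whose default scope is
# every string column, mirroring the behavior described above.
records = [
    {"id": 1, "city": "NYC", "amount": 100.0},
    {"id": 2, "city": "NYC", "amount": 250.0},
]

def find_and_replace(rows, find, replace, columns=None):
    """Replace values in the named columns, or in all string columns
    when no columns are specified (the default scope)."""
    for row in rows:
        targets = columns or [k for k, v in row.items() if isinstance(v, str)]
        for col in targets:
            if row.get(col) == find:
                row[col] = replace

find_and_replace(records, "NYC", "New York")  # touches the city column only
print(records)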
Informatica Developer
This section describes the changes to the Developer tool in version 10.1.
Keyboard Shortcuts
Effective in version 10.1, the shortcut to select the next area is Ctrl+Tab, followed by pressing the Tab key
three times.
For more information, see the "Keyboard Shortcuts" appendix in the Informatica 10.1.1 Developer Tool Guide.
Home Page
Effective in version 10.1, the home page displays the trending search, top 50 assets, and recently viewed
assets. Trending search refers to the terms that were searched the most in the catalog in the last week. The
top 50 assets are the assets with the highest number of relationships with other assets in the catalog.
Previously, the Enterprise Information Catalog home page displayed the search field, the number of resources
that Live Data Map scanned metadata from, and the total number of assets in the catalog.
For more information about the Enterprise Information Catalog home page, see the "Getting Started with
Informatica Enterprise Information Catalog" chapter in the Informatica 10.1 Enterprise Information Catalog
User Guide.
Asset Overview
Effective in version 10.1, you can view the schema name associated with an asset in the Overview tab.
Previously, the Overview tab for an asset did not display the associated schema name.
For more information about assets in Enterprise Information Catalog, see the Informatica 10.1 Enterprise
Information Catalog User Guide.
Previously, the Live Data Map Administrator home page displayed several monitoring statistics, such as
number of resources for each resource type, task distribution, and predictive job load.
For more information about the Live Data Map Administrator home page, see the "Using Live Data Map
Administrator" chapter in the Informatica 10.1 Live Data Map Administrator Guide.
Metadata Manager
This section describes changes to Metadata Manager in version 10.1.
Previously, Metadata Manager organized SQL Server Integration Services objects by connection and by
package. The metadata catalog contained a Connections folder in addition to a folder for each package.
For more information about SQL Server Integration Services resources, see the "Data Integration Resources"
chapter in the Informatica 10.1 Metadata Manager Administrator Guide.
Because the command line programs no longer accept security certificates that have errors, the
[Link] property is obsolete. The property no longer appears in the
[Link] files for mmcmd or mmRepoCmd.
For more information about certificate validation for mmcmd and mmRepoCmd, see the "Metadata Manager
Command Line Programs" chapter in the Informatica 10.1 Metadata Manager Administrator Guide.
PowerCenter
This section describes changes to PowerCenter in version 10.1.
For more information about managing operating system profiles, see the "Users and Groups" chapter in the
Informatica 10.1 Security Guide.
Security
This section describes changes to security in version 10.1.
The changes affect secure communication within the Informatica domain, secure connections to web
application services, and connections from the Informatica domain to an external destination.
Permissions
Effective in version 10.1, the following Model repository objects have permission changes:
• Applications, mappings, and workflows. All users in the domain are granted all permissions.
• SQL data services and web services. Users with effective permissions are assigned direct permissions.
After you upgrade, you might need to review and change the permissions to ensure that users have
appropriate permissions on objects.
For more information, see the "Permissions" chapter in the Informatica 10.1 Security Guide.
Transformations
This section describes changed transformation behavior in version 10.1.
Informatica Transformations
This section describes the changes to the Informatica transformations in version 10.1.
The Address Validator transformation contains the following updates to address functionality:
Effective in version 10.1, the Address Validator transformation uses version 5.8.1 of the Informatica
Address Verification software engine. The engine enables the features that Informatica adds to the
Address Validator transformation in version 10.1.
Previously, the transformation used version 5.7.0 of the Informatica AddressDoctor software engine.
Effective in version 10.1, you can select Rooftop as a geocode data property to retrieve rooftop-level
geocodes for United Kingdom addresses.
Previously, you selected the Arrival Point geocode data property to retrieve rooftop-level geocodes for
United Kingdom addresses.
If you upgrade a repository that includes an Address Validator transformation, you do not need to
reconfigure the transformation to specify the Rooftop geocode property. If you specify rooftop geocodes
and the Address Validator transformation cannot return the geocodes for an address, the transformation
does not return any geocode data.
Support for unique property reference numbers in United Kingdom input data
Effective in version 10.1, the Address Validator transformation has a UPRN GB input port and a UPRN GB
output port.
Use the input port to retrieve a United Kingdom address for a unique property reference number that you
enter. Use the UPRN GB output port to retrieve the unique property reference number for a United
Kingdom address.
These features are also available in 9.6.1 HotFix 4. They are not available in 10.0.
For more information, see the Informatica 10.1 Address Validator Port Reference.
Data Processor Transformation
This section describes the changes to the Data Processor transformation.
Excel 2013
Effective in version 10.1, the ExcelToXml_03_07_10 document processor can process Excel 2013 files. You
can use the document processor in a Data Processor transformation as a pre-processor that converts the
format of a source document before a transformation.
For more information, see the Informatica 10.1 Data Transformation User Guide.
Exception Transformations
Effective in version 10.1, you can configure a Bad Record Exception transformation and a Duplicate Record
Exception transformation to create exception tables in a non-default database schema.
Previously, you configured the transformations to create exception tables in the default schema on the
database.
For more information, see the Informatica 10.1 Developer Transformation Guide.
Workflows
This section describes changed workflow behavior in version 10.1.
Informatica Workflows
This section describes the changes to Informatica workflow behavior in version 10.1.
Previously, you added one or more Human tasks to a single sequence flow between Inclusive gateways.
For more information, see the Informatica 10.1 Developer Workflow Guide.
Metadata Manager
This section describes release tasks for Metadata Manager in version 10.1.
When you configure the resource, you must also enter the file path to the 10.0 Informatica Command Line
Utilities installation directory in the 10.0 Command Line Utilities Directory property.
For more information about Informatica Platform resources, see the "Data Integration Resources" chapter in
the Informatica 10.1 Metadata Manager Administrator Guide.
Previously, you could set the certificate validation property for the command line programs to one of the
following values:
• NO_AUTH. The command line program accepts the digital certificate, even if the certificate has errors.
• FULL_AUTH. The command line program does not accept a security certificate that has errors.
The NO_AUTH setting is no longer valid. The command line programs now only accept security certificates
that do not contain errors.
If a secure connection is configured for the Metadata Manager web application, and you previously set the
[Link] property to NO_AUTH, you must now configure a truststore file. To configure
mmcmd or mmRepoCmd to use a truststore file, edit the [Link] file that is associated
with mmcmd or mmRepoCmd. Set the [Link] property to the path and file name of the truststore
file.
For more information about the [Link] files for mmcmd and mmRepoCmd, see the
"Metadata Manager Command Line Programs" chapter in the Informatica 10.1 Metadata Manager
Administrator Guide.
Security
This section describes release tasks for security features in version 10.1.
Permissions
After you upgrade to 10.1, the following Model repository objects have permission changes:
• Applications, mappings, and workflows. All users in the domain are granted all permissions.
• SQL data services and web services. Users with effective permissions are assigned direct permissions.
The changes affect the level of access that users and groups have to these objects.
After you upgrade, review and change the permissions on applications, mappings, workflows, SQL data
services, and web services to ensure that users have appropriate permissions on objects.
For more information, see the "Permissions" chapter in the Informatica 10.1 Security Guide.
Chapter 16
PowerExchange Adapters
For more information, see the Informatica PowerExchange for JD Edwards EnterpriseOne 10.0 User Guide.
For more information, see the Informatica PowerExchange for LDAP 10.0 User Guide.
For more information, see the Informatica PowerExchange for Microsoft Dynamics CRM 10.0 User Guide.
• You can use PowerExchange for Netezza to read data from and write data to Netezza databases. You can
process large volumes of data by using PowerExchange for Netezza.
• You can use the Secure Sockets Layer (SSL) protocol to configure a secure connection between Netezza
clients and the Netezza server.
For more information, see the Informatica PowerExchange for Netezza 10.0 User Guide.
PowerExchange for OData
Effective in version 10.0, you can use PowerExchange for OData to read data from an OData provider that
exposes data through an OData service. You can also run a profile against OData data objects.
For more information, see the Informatica PowerExchange for OData 10.0 User Guide.
Application Services
This section describes new application services features in version 10.0.
Disabling and Recycling Application Services
Effective in version 10.0, disabling and recycling application services have the following new features:
Planned and unplanned notes
When you disable or recycle an application service from the Administrator tool, you can specify whether
the action is planned or unplanned. Planned and unplanned notes appear on the Command History and
Events panels in the Domain view on the Manage tab.
Comments
When you disable or recycle an application service from the Administrator tool, you can optionally enter
comments about the action. Comments appear on the Command History and Events panels in the
Domain view on the Manage tab.
For more information, see the Informatica 10.0 Application Service Guide.
Architecture
Effective in version 10.0, the Data Integration Service includes the following types of components:
Service components
Service components include modules that manage requests from client tools, the logical Data
Transformation Manager (LDTM) that optimizes and compiles jobs, and managers that manage
application deployment and caches. The service components run within the Data Integration Service
process. The Data Integration Service process must run on a node with the service role.
Compute component
The compute component of the Data Integration Service is the execution Data Transformation Manager
(DTM). The DTM extracts, transforms, and loads data to complete a data transformation job. The DTM
must run on a node with the compute role.
When the Data Integration Service runs on a single node, the service and compute components of the Data
Integration Service run on the same node. The node must have both the service and compute roles.
When the Data Integration Service runs on a grid, the service and compute components of the Data
Integration Service can run on the same node or on different nodes, based on how you configure the grid and
the node roles. When you configure a Data Integration Service grid to run jobs in separate remote processes,
the nodes in the grid can have a combination of the service only role, the compute only role, and both the
service and compute roles. Some nodes in the grid are dedicated to running the service processes while
other nodes are dedicated to running mappings.
For more information about Data Integration Service components, see the "Data Integration Service
Architecture" chapter in the Informatica 10.0 Application Service Guide.
For more information about the DTM resource allocation policy, see the "Data Integration Service
Architecture" chapter in the Informatica 10.0 Application Service Guide.
For more information about the data movement mode, see the "Data Integration Service Architecture" chapter
in the Informatica 10.0 Application Service Guide.
For more information about how to maximize parallelism, see the "Data Integration Service Management"
chapter in the Informatica 10.0 Application Service Guide.
Cache Directory
Configure multiple cache directories to optimize performance during cache partitioning for Aggregator,
Joiner, or Rank transformations.
Target Directory
Configure multiple target directories to optimize performance when multiple partitions write to a flat file
target.
Temporary Directories
Configure multiple temporary directories to optimize performance during cache partitioning for Sorter
transformations.
For more information about optimizing cache and target directories for partitioning, see the "Data Integration
Service Management" chapter in the Informatica 10.0 Application Service Guide.
You can integrate the Model repository with the following version control systems:
• Perforce
For more information, see the "Model Repository Service" chapter in the Informatica 10.0 Application Service
Guide.
System Services
Effective in version 10.0, the domain includes system services. A system service is an application service
that can have a single instance in the domain. System services are automatically created for you when you
create or upgrade the domain. You can enable, disable, and configure system services.
The following image shows the System Services folder in the Domain Navigator:
Email Service
The Email Service emails notifications for business glossaries and workflows. Enable the Email Service
to allow users to configure email notifications.
Resource Manager Service
The Resource Manager Service manages computing resources in the domain and dispatches jobs to
achieve optimal performance and scalability. The Resource Manager Service collects information about
nodes with the compute role. The service matches job requirements with resource availability to identify
the best compute node to run the job.
Enable the Resource Manager Service when you configure a Data Integration Service grid to run jobs in
separate remote processes.
Scheduler Service
The Scheduler Service manages schedules for deployed mapping and workflow jobs in the domain.
Enable the Scheduler Service when you want to create schedules, assign jobs to them, and run
scheduled jobs.
For more information about system services, see the "System Services" chapter in the Informatica 10.0
Application Service Guide.
For more information, see the Informatica 10.0 Big Data Management Installation and Configuration Guide.
Hadoop Connection
Effective in version 10.0, you must configure a Hadoop connection when you run a mapping in the Hadoop
environment. You can edit the Hadoop connection to configure run-time properties for the Hadoop
environment. The run-time properties include properties for the Hive and Blaze engines.
The following image shows the Hadoop connection as a cluster type connection:
For more information, see the "Connections" chapter in the Informatica 10.0 Big Data Management User
Guide.
You can read data from and write data to Hortonworks HDP clusters that are deployed on Amazon EC2.
Hadoop distributions
You can connect to Hadoop clusters that run the following Hadoop distributions:
Hive on Tez
You can use Hive on Tez as the execution engine for Hadoop clusters that run Hortonworks HDP.
Kerberos Authentication
You can use Microsoft Active Directory as the key distribution center for Cloudera CDH and Hortonworks
HDP Hadoop clusters.
When you run a mapping in the Hadoop environment, you must configure a Hadoop connection for the
mapping. Validate the mapping to ensure that you can push the mapping logic to Hadoop. After you validate
a mapping for the Hadoop environment, you can run the mapping.
For more information, see the "Mappings in a Hadoop Environment" chapter in the Informatica 10.0 Big Data
Management User Guide.
Business Glossary
This section describes new Business Glossary features in version 10.0.
Approval Workflow
Effective in version 10.0, data stewards can publish Glossary assets after a voting process. The glossary
administrator configures the approval workflow for a glossary, after which the data steward must publish or
reject all the assets in the glossary through a voting process. The glossary administrator can configure up to
two levels of approvals. The approvers can approve or reject the asset changes or abstain from voting. The
data steward publishes or rejects the asset based on the voting results.
Glossary assets that are published after an approval workflow have a new tab called Voting History in the
audit trail. This tab displays the details about the approval workflow.
For more information, see the "Approval Workflow" chapter in the Informatica 10.0 Business Glossary Guide.
For more information about asset attachments, see the "Glossary Content Management" chapter in the
Informatica 10.0 Business Glossary Guide. For more information about configuring the attachment directory,
see the "Analyst Service" chapter in the Informatica Application Service Guide.
For more information about the long string data type, see the "Glossary Content Management" chapter in the
Informatica 10.0 Business Glossary Guide.
For more information about rich text, see the "Glossary Content Management" chapter in the Informatica 10.0
Business Glossary Guide.
Optionally, you can choose to run the import task in the background. While the Analyst tool imports
glossaries in the background, you can perform other tasks. After the import is complete, the Analyst tool
sends you a notification.
In the final step of the import wizard, the Analyst tool now displays an enhanced summary and conflict
resolution options.
For more information about the import and export enhancements, see the "Glossary Administration" chapter
in the Informatica 10.0 Business Glossary Guide.
Email Notifications
Effective in version 10.0, you can choose to receive notifications through email. You continue to receive
notifications in the Analyst tool. You can configure the email notification settings in the Glossary Settings
workspace.
For more information about email notifications, see the "Finding Glossary Content" chapter in the Informatica
10.0 Business Glossary Guide.
Find Assets
Effective in version 10.0, you can search for assets that are displayed in the relationship view diagram.
For more information, see the "Finding Glossary Content" chapter in the Informatica 10.0 Business Glossary
Guide.
For more information, see the Informatica 10.0 Business Glossary Guide.
Glossary Security
Effective in version 10.0, the Analyst tool contains the following enhancements to the Glossary security:
Asset View
Effective in version 10.0, the asset view also displays the number of attachments and the name of the
glossary that contains the asset.
For more information, see the "Introduction to Business Glossary" chapter in the Informatica 10.0 Business
Glossary Guide.
For more information, see the "Glossary Administration" chapter in the Informatica 10.0 Business Glossary
Guide.
infacmd bg Command
The following table describes a new infacmd bg command:
Command Description
upgradeRepository - Upgrades the Business Glossary data in the Model repository. Run this command after you upgrade the domain.
Command Description
addParameterSetEntries - Adds entries to a parameter set for a mapping or workflow that is deployed as an application.
deleteParameterSetEntries - Deletes entries from a parameter set for a mapping or workflow that is deployed as an application. You can delete specific parameter set entries or you can delete all of the parameter set entries.
listComputeOptions - Lists Data Integration Service properties for a node with the compute role.
updateComputeOptions - Updates Data Integration Service properties for a node with the compute role. Use the command to override Data Integration Service properties for a specific compute node.
updateParameterSetEntries - Updates entries in a parameter set for a mapping or workflow in an application. Enter parameter name-value pairs to update, separated by spaces, as shown below.
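For example, the name-value pairs passed to updateParameterSetEntries might be written as follows; the parameter names and values are illustrative:

Param_Connection=Oracle_Dev Param_RetryCount=10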
Command Description
- [Link]
- [Link]
- [Link]
- [Link]
The following email server options are removed:
- [Link]
- [Link]
The following options are removed for workflow operations:
- [Link]
- [Link]
- [Link]
- [Link]
infacmd es Commands
The new infacmd es program manages the Email Service.
Command Description
ListServiceOptions Returns a list of properties that are configured for the Email Service.
UpdateSMTPOptions Updates the email server properties for the Email Service.
The following table describes the obsolete infacmd hts commands and identifies the commands that you can
use to perform the corresponding actions in version 10.0:
Command Description
CreateDB - Creates the database tables that store run-time metadata for Human tasks. In version 10.0, all run-time metadata for workflows is stored in a common set of tables. Use infacmd wfs CreateTables to create the workflow metadata tables.
DropDB - Drops the database tables that store run-time metadata for Human tasks. In version 10.0, all run-time metadata for workflows is stored in a common set of tables. Use infacmd wfs DropTables to drop the workflow metadata tables.
Exit - Stops a Human task and passes the records that the task identifies to the next stage in the workflow. Use infacmd wfs BulkComplete to stop a Human task and to pass the records that the task identifies to the next stage in the workflow.
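For example, a script that previously called the obsolete hts commands would now call the wfs equivalents. The connection options shown (-dn, -un, -pd, -sn) are the standard infacmd options and the values are placeholders; check the Informatica Command Reference for the exact syntax:

infacmd wfs CreateTables -dn MyDomain -un Administrator -pd MyPassword -sn MyDataIntegrationService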
Command Description
UpdateNodeRole - Updates the role on a node in the domain. You can enable or disable the service role or the compute role on a node.
Command Description
CreateConnection - The connection options for the Hadoop connection are added.
DisableNodeResource, EnableNodeResource, ListNodeResources, and RemoveNodeResource - The ResourceCategory option is added. Use this option to specify that the resource is for the PowerCenter Integration Service.
GetLog - The following service types are added for the ServiceType option:
- ES. Email Service
- SCH. Scheduler Service
- RMS. Resource Manager Service
GetNodeName - The Outputfile option is added. Use this option with a file name and path to print the node name in a file.
ListNodes - The NodeRole option is added. Use this option to list nodes with a specified role.
ListServices - The following service types are added for the ServiceType option:
- ES. Email Service
- SCH. Scheduler Service
- RMS. Resource Manager Service
PurgeMonitoring - The NumDaysToRetainDetailedStat option is added. Use this option to configure the number of days of detailed historical data that are retained in the Model repository when the Data Integration Service purges statistics.
UpdateMonitoringOptions - The DetailedStatisticsExpiryTime option is added. Use this option to configure when the Data Integration Service purges detailed statistics from the Model repository. The valid StatisticsExpiryTime values are changed. Minimum is 0. Maximum is 366. Default is 180.
Command Description
CheckInObject - Checks in a single object that is checked out. The object is checked in to the Model repository.
ListFolders - Lists the names of all of the folders in the project folder path that you specify.
infacmd ms Commands
The following table describes new infacmd ms commands:
Command Description
ListComputeNodeAttributes - Lists the compute node attributes that have been overridden for the specified node or for all nodes.
SetComputeNodeAttributes - Overrides the compute node attributes for the specified node.
Command Description
CreateSchedule Creates a schedule for one or more deployed mapping or workflow objects.
ListServiceOptions Returns a list of the properties that are configured for the Scheduler Service.
ListServiceProcessOptions Returns a list of the properties that are configured for a Scheduler Service process.
Command Description
BulkComplete - Stops operations for a Human task and passes the records that the task identifies to the next stage in the workflow.
CreateTables - Creates the database tables that store run-time metadata for workflows.
DropTables - Drops the database tables that store run-time metadata for workflows.
ListMappingPersistedOutputs - Lists the state of each persisted mapping output from a Mapping task instance that the command specifies.
SetMappingPersistedOutputs - Updates the persisted mapping outputs for a Mapping task instance that you specify or sets the persisted mapping outputs to null values.
UpgradeParameterFile - Upgrades a parameter file to verify that the parameter values in the file are valid in the current release. When you run the command, you identify a parameter file to upgrade and you specify a target file to contain the valid parameter values.
Command Description
abortWorkflow - The RuntimeInstanceID option is renamed to InstanceId. The option identifies the workflow instance to abort. The Wait option is removed.
cancelWorkflow - The RuntimeInstanceID option is renamed to InstanceId. The option identifies the workflow instance to cancel. The Wait option is removed.
recoverWorkflow - The RuntimeInstanceID option is renamed to InstanceId. The option identifies the workflow instance to recover. The Wait option is removed.
infasetup Commands
The following table describes the new SystemLogDirectory option:
Command Description
DefineDomain, DefineGatewayNode, DefineWorkerNode, UpdateGatewayNode, UpdateWorkerNode - The SystemLogDirectory option is added. Use this option to designate a custom location for logs.
session_property - This massupdate command updates the value of any supported session or session config property whether or not it is overridden.
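A hedged sketch of a massupdate call that targets the session_property type; the option letters and values here are assumptions for illustration, so check the pmrep reference for the exact syntax:

pmrep massupdate -t session_property -n "Stop on errors" -v 1 -f MyFolder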
Connectivity
This section describes new connectivity features in version 10.0.
You can select the connection provider that you want to use to connect to the Microsoft SQL Server
database. You can select either the ODBC or OLE DB connection type. You can also enable the Integration
Service to use the Data Source Name (DSN) for the connection. Additionally, you can use NTLM
authentication to authenticate the user who connects to Microsoft SQL Server.
For more information about configuring native connectivity, see the "Connecting to Databases from UNIX"
appendix in the Informatica 10.0 Installation and Configuration Guide.
Connection Switching
Effective in version 10.0, in the Developer tool, you can switch the connection of a relational data object or
customized data object to use a different relational database connection. After you switch the connection,
the Developer tool updates the connection details for the data object in all Read, Write, and Lookup
transformations that are based on the data object. You might want to switch the connection when you
migrate from one database to another and want to simultaneously update the existing mappings to use the
new connection.
You can switch the connection for the following types of relational database connections:
• IBM DB2
• Microsoft SQL Server
• ODBC
• Oracle
The following image shows the dialog box that you use to switch a connection:
For more information, see the "Connections" chapter in the Informatica 10.0 Developer Tool Guide.
Data Types
This section describes new data type features in version 10.0.
For transformations that support the Decimal data type with precision up to 38 digits, when the target contains
a precision that is greater than 38 digits and has high precision enabled, the Data Integration Service stores
the result as a double.
For more information, see the "Data Type Reference" appendix in the Informatica 10.0 Developer Tool Guide.
The following table describes the post-upgrade behavior based on the applicable precision:
For more information, see the "Data Type Reference" appendix in the Informatica 10.0 Developer Tool Guide.
When you import the Timestamp with Time Zone data type into the Developer tool, the associated
transformation data type is timestampWithTZ. timestampWithTZ has a precision of 36 and a scale of 9.
The Timestamp with Time Zone displacement value range is -12:00 < UTC < +14:00.
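For example, the value 2017-09-15 10:30:45.123456789 -07:00 is 36 characters long, which corresponds to the precision of 36, and its nine fractional-second digits correspond to the scale of 9. The sample value is illustrative.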
For more information, see the "Data Type Reference" appendix in the Informatica 10.0 Developer Tool Guide.
When you import the Timestamp with Local Time Zone data type into the Developer tool, the associated
transformation data type is date/time. The Timestamp with Local Time Zone data type is implicitly supported
by most transformations as the functionality is equivalent to Timestamp.
Timestamp (6) with Local Time Zone has a precision of 26 and a scale of 6. It is mapped to the date/time
(29,9) transformation data type.
For more information, see the "Data Type Reference" appendix in the Informatica 10.0 Developer Tool Guide.
Documentation
This section describes new or updated guides with the Informatica documentation in version 10.0.
Effective in version 10.0, the Informatica Accessibility Guide contains accessibility information and
keyboard shortcuts for Informatica Administrator, Informatica Analyst, and Informatica Developer. The
Informatica Accessibility Guide is included in the online help for the Administrator tool, Analyst tool, and
Developer tool.
Effective in version 10.0, the Informatica Big Data Management Security Guide contains security
information for Big Data Management and Hadoop.
Previously, security for big data and Hadoop was documented in the Informatica Big Data Edition User
Guide.
Effective in version 10.0, the PowerCenter Data Profiling Guide is removed from the PowerCenter
documentation.
To learn more about profiling and discovery in Informatica, see the Informatica 10.0 Data Discovery
Guide.
Effective in version 10.0, the Informatica Big Data Edition User Guide is removed from the PowerCenter
documentation.
To learn more about big data in Informatica, see the Informatica 10.0 Big Data Management User Guide.
Effective in version 10.0, the Informatica Big Data Edition Installation and Configuration Guide is removed
from the PowerCenter documentation.
To learn more about big data installation and configuration in Informatica, see the Informatica 10.0 Big
Data Management Installation and Configuration Guide.
Informatica Data Services Performance Tuning Guide
Effective in version 10.0, the Informatica Data Services Performance Tuning Guide is renamed to the
Informatica Performance Tuning Guide.
To learn more about performance tuning in Informatica, see the Informatica 10.0 Performance Tuning
Guide.
Domain
This section describes new domain features in version 10.0.
Nodes
Effective in version 10.0, each node has a role that defines the purpose of the node.
Service role
A node with the service role can run application services. When you enable the service role on a node,
the Service Manager supports application services configured to run on that node.
Compute role
A node with the compute role can perform computations requested by remote application services.
When you enable the compute role on a node, the Service Manager manages the containers on the node.
A container is an allocation of memory and CPU resources. An application service uses the container to
remotely perform computations on the node. For example, a Data Integration Service grid includes Node
1 with the service role and Node 2 with the compute role. The Data Integration Service process that runs
on Node 1 runs a mapping within a container on Node 2.
A node with both roles can run application services and locally perform computations for those services.
By default, each gateway and worker node has both the service and compute roles enabled. If a node is
assigned to a Data Integration Service grid that is configured to run jobs on remote nodes with the compute
role, you might want to update the node role. Enable only the service role to dedicate the node to running the
Data Integration Service process. Enable only the compute role to dedicate the node to running Data
Integration Service mappings.
For more information about node roles, see the "Nodes" chapter in the Informatica 10.0 Administrator Guide.
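For example, to dedicate a node to running Data Integration Service mappings, you might disable the service role and keep the compute role with infacmd isp UpdateNodeRole. The option names below (-nn for the node name, -esr and -ecr for the role flags) are assumptions for illustration; check the Informatica Command Reference for the exact syntax:

infacmd isp UpdateNodeRole -dn MyDomain -un Administrator -pd MyPassword -nn Node2 -esr false -ecr true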
Informatica Administrator
This section describes new Administrator tool features in version 10.0.
Domain view
The Domain view is an overview of the status of the domain. You can view information about the domain,
view historical information about the domain, and perform common actions.
The following image shows the Domain view on the Manage tab:
• Domain. You can view properties, logs, and past events for the domain. You can also shut down the
domain.
• Contents panel. Displays services, nodes, and grids in the domain. You can view properties, events,
logs, and dependencies for objects. You can also enable, disable, and recycle services and shut down
nodes.
• Filter. You can filter domain contents by state or service type. You can also search domain objects, or
navigate domain objects by type, grid, or folder.
• Service State Summary. Doughnut chart that displays the number and states of services in the
domain.
• Resource usage panels. Bar charts that compare memory and CPU usage for objects in the domain to
memory and CPU usage for all processes on the machine.
• Command History. Displays service lifecycle commands that users issue from the Administrator tool.
Lifecycle commands include enable, disable, and recycle.
• History view. Displays historical status, resource consumption, and events in the domain for a
selected time range.
Navigator
You can search for and filter nodes, application services, and grids in the Domain Navigator on the
Services and Nodes view. You can search for an object by name. Or, you can filter the list of objects that
appear in the Navigator by object type.
Schedules view
Dependency Graph
Effective in version 10.0, the Dependency graph is accessed from the Domain view on the Manage tab.
Previously, the Dependency graph was accessed from the Services and Nodes view on the Domain tab.
The Dependency graph has a new user interface and additional functionality.
Monitoring
Effective in version 10.0, the Monitoring tab in the Administrator tool is renamed the Monitor tab.
• Summary Statistics view. Displays resource usage, object distribution, and object states for a
selected time range.
The following image shows the Summary Statistics view:
• Execution Statistics view. Contains the Navigator and views that were on the Monitoring tab in
previous versions.
You can view statistics about ad hoc mapping jobs, deployed mapping jobs, and mapping objects in a
workflow.
• Summary Statistics view. Displays throughput and resource usage information for the source and
target.
The following image shows the Summary Statistics view for a mapping job:
• Detailed Statistics view. Appears for jobs that run in separate local processes for longer than one
minute. Displays graphs of throughput and resource usage information for the source and target.
The following image shows the Detailed Statistics view for a mapping job in a workflow:
Configuration
Monitoring Configuration, formerly Global Settings, has the new option Preserve Detailed Historical Data.
Use this option to configure when expired per-minute statistics can be purged from the Model repository.
Default is 14. Minimum is 1. Maximum is 14.
For more information, see the "Monitoring" chapter in the Informatica 10.0 Administrator Guide.
Asset Versioning
Effective in version 10.0, when the Model repository is integrated with a version control system, the version
control system protects assets from being overwritten by other members of the development team. You can
check assets out and in, and undo the checkout of assets.
For more information, see the "Model Repository" chapter in the Informatica 10.0 Analyst Tool Guide.
Profiles
This section describes new Analyst tool features for profiles and profile results.
Column Profile
Effective in version 10.0, you can right-click the data object in the Library workspace to create a column
profile. The data object and folder options are updated automatically in the profile wizard.
For more information about column profiles, see the "Column Profiles in Informatica Analyst" chapter in the
Informatica 10.0 Data Discovery Guide.
• View profile results in summary view and detailed view. The summary view provides a high-level overview
of the profile results in a grid format. The detailed view displays column-specific information in detail.
• View outliers in the summary view and detailed view of profile results. An outlier is a pattern, value, or
frequency for a column that does not fall within an expected range of values.
• View profile results for the latest profile run, historical profile run, and consolidated profile run. You can
view the profile results for any historical profile run. When you run the consolidated profile run, you can
view the latest results for each column in the profile.
• Compare profile results for two profile runs, and view the profile results in summary view and detailed
view.
• View profile results for a profile with JSON or XML data sources.
• Add business terms, tags, and comments to a profile and columns in the profile.
For more information about column profile results, see the "Column Profile Results in Informatica Analyst"
chapter in the Informatica 10.0 Data Discovery Guide.
For more information, see the Informatica 10.0 Data Discovery Guide.
JDBC Connectivity
Effective in version 10.0, you can specify a JDBC connection as a profiling warehouse connection for IBM
DB2 UDB, Microsoft SQL Server, and Oracle database types. You can create column profiles, rule profiles,
domain discovery, and scorecards with a JDBC connection as a profiling warehouse connection.
For more information, see the Informatica 10.0 Installation and Configuration Guide.
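As an illustration, a standard Oracle thin-driver JDBC URL has the following form; the host, port, and service name are placeholders, and the exact driver and URL format that the profiling warehouse connection expects may differ:

jdbc:oracle:thin:@dbhost:1521/orclservice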
For more information about object versioning, see the "Column Profiles in Informatica Analyst" chapter in the
Informatica 10.0 Data Discovery Guide.
For more information, see the Informatica 10.0 Data Discovery Guide.
Scorecard Filter
Effective in version 10.0, you can create and apply a filter on the metrics of a scorecard.
For more information about scorecard filters, see the "Scorecards in Informatica Analyst" chapter in the
Informatica 10.0 Data Discovery Guide.
Informatica Developer
This section describes new Informatica Developer features in version 10.0.
For more information, see the "Physical Data Objects" chapter in the Informatica 10.0 Developer Tool Guide.
Effective in version 10.0, the Data Integration Service can fetch metadata changes from flat file and relational sources at run time. You can configure a data object to perform the following tasks:
• Read data from sources where the order of the columns in the source is different from that of the
columns in the physical data object.
• Read data from additional columns in sources that are not present in the physical data object.
• Ignore data for columns that are present in the physical data object but not in the source.
For relational data sources, the Data Integration Service directly fetches the metadata changes from the
database schema.
For flat file data sources, you must configure the flat file data object for the Data Integration Service to fetch
the metadata changes from the data file header, a control file, or automatically from the columns in the data
source. Configure the Generate Run-time Column Names property on the Advanced tab of the flat file data
object.
For more information, see the "Dynamic Mappings" chapter in the Informatica 10.0 Developer Mapping Guide.
• Normalizer transformation
• Sequence Generator transformation
• Update Strategy transformation
For more information, see the Informatica 10.0 Developer Mapping Guide.
Monitoring Tool
Effective in version 10.0, the Monitoring tool has the following new features:
• Contains the Navigator and views that were in the Monitoring tool in version 9.6.1.
• Displays resource usage, object distribution, and object states for a selected time range.
• Displays additional information about ad hoc mapping jobs, deployed mapping jobs, and mapping objects in workflows in the Execution Statistics view. When you select one of these objects in the contents panel, the details panel displays the following new views:
• Summary Statistics view. Displays throughput and resource usage information for the source and
target.
The following image shows the Summary Statistics view for a mapping job:
For more information, see the "Viewing Data" chapter in the Informatica 10.0 Developer Tool Guide.
Object Versioning
Effective in version 10.0, when the Model repository is integrated with a version control system, the version
control system protects objects from being overwritten by other members of the development team. You can
check objects out and in, undo the checkout of objects, and view and restore historical versions of objects.
The Developer tool depicts a versioned Model repository with a white icon decorated with a green check
mark.
The following image shows two connected repositories: MRS1, which has been integrated with a version
control system, and MRS2, which has not:
For more information, see the "Model Repository" chapter in the Informatica 10.0 Developer Tool Guide.
For more information, see the "Application Deployment" chapter in the Informatica 10.0 Developer Tool Guide.
Profiles
This section describes new Developer tool features for profiles and profile results.
Column Profiles with JSON and XML Data Sources
Effective in version 10.0, you can create a column profile on JSON and XML data sources. Use one of the following methods:
• Flat file. Create a text file that contains the location of the JSON or XML source file. Create a flat file data object with the text file, and create a column profile on the flat file data object.
• Complex file reader. Create a complex file data object on the JSON or XML source file, and create a column profile with the complex file data object.
• JSON or XML file in HDFS. Create a connection to HDFS, and create a complex file data object on the JSON or XML file in HDFS. Create a column profile with the complex file data object.
• JSON or XML files in a folder. Consolidate the JSON or XML files into a folder. Create a connection to HDFS, and create a complex file data object with the folder. Create a column profile on the complex file data object.
For more information about column profiles with JSON and XML data sources, see the "Data Object Profiles"
chapter in the Informatica 10.0 Data Discovery Guide.
Curation
Effective in version 10.0, you can curate inferred profile results in the Developer tool.
For more information about curation, see the "Enterprise Discovery Results" chapter in the Informatica 10.0 Data Discovery Guide.
JDBC Connectivity
Effective in version 10.0, you can specify a JDBC connection as a profiling warehouse connection for IBM
DB2 UDB, Microsoft SQL Server, and Oracle database types. You can create column profiles, rule profiles,
domain discovery, and scorecards with a JDBC connection.
For more information, see the Informatica 10.0 Installation and Configuration Guide.
Object Versioning
Effective in version 10.0, when the Model repository is integrated with a version control system, the version
control system protects objects from being overwritten by other members of the development team. You can
check profiles out and in, undo the checkout of profiles, and view and restore historical versions of profiles.
For more information about object versioning, see the "Informatica Developer Profiles" chapter in the
Informatica 10.0 Data Discovery Guide.
Informatica Development Platform
This section describes new Informatica Development Platform features in version 10.0.
Informatica Connector Toolkit
Effective in version 10.0, you can use the Informatica Connector Toolkit to define the following adapter features:
Data types
You can map the native data types to Java data types. When you map the native data type, select the
best Java data type to read from the data source and select the best native data type to write to the
target database or application.
Native metadata
You can define multiple native metadata definitions for an adapter. For example, you can create different
native metadata objects for tables, views, and synonyms in a relational data source.
Sort and select
You can define Sort statement support for an adapter to retrieve data from the data source in a specific order. You can define whether the adapter supports Select statements when the adapter reads from the data source. You can use the Informatica Connector Toolkit to define the following Select statements for an adapter:
• Select All
• Select Any
• Select Distinct
• Select First Row
• Select Last Row
Partition
You can specify the partition type and implement the partition logic to use when the adapter reads or
writes data.
You can specify one of the following partition types or all the partition types for an adapter:
• Dynamic. The Data Integration Service determines the number of partitions at run time based on the
partition information from the data source.
• Static. The Data Integration Service determines partitioning logic based on the partition information
that the user specifies, such as the number of partitions or key range partitioning.
Parameterization
You can specify whether the read and write capability attributes of a native metadata object support full
parameterization or partial parameterization. The read and write capability attributes of the native
metadata object can be assigned values or parameters at run time.
Pre and post data operation tasks
You can implement pre and post tasks that can be run before or after a read or write operation. For
example, you can implement the functionality to truncate a target table before a write operation.
Messages
You can create messages to handle exceptions that occur during the design time or run time of the
adapter. You can use the Message wizard to add, edit, or delete messages. You can localize the
message files if required.
Run-time behavior
You can implement the run-time behavior of the adapter in C. Write C code to define how the adapter reads from and writes to the data source.
Reject files
You can implement support for reject files to handle data rejected by the target.
For more information, see the Informatica Development Platform 10.0 Informatica Connector Toolkit
Developer Guide.
Mappings
This section describes new mapping features in version 10.0.
Informatica Mappings
This section describes new mapping features in version 10.0.
Dynamic Mappings
Effective in version 10.0, you can configure dynamic mappings to change sources, targets, and
transformation logic at run time based on parameters and rules that you define. You can determine which
ports a transformation receives, which ports to use in the transformation logic, and which links to establish
between transformation groups. Dynamic mappings enable you to manage frequent metadata changes to the
data sources or to reuse the mapping logic for different data sources with different schemas.
Dynamic mappings include the following features that you can configure:
• Dynamic sources allow changes to the metadata in flat file and relational sources at run time. When the
metadata in a flat file or relational source changes, Read and Lookup transformations can get data object
columns directly from the dynamic sources at run time.
• Transformations can include dynamic ports, which receive one or more columns that can change based
on the rules that you define. You can define rules to include or exclude columns in a dynamic port.
The following transformations can include dynamic ports:
- Aggregator
- Expression
- Filter
- Joiner
- Lookup
- Rank
- Router
- Sequence Generator
- Sorter
- Update Strategy
• You can define a port selector in the Joiner transformation, in the Lookup transformation, and in the
Expression transformation. A port selector is an ordered list of ports that you can reference in the
transformation logic. Configure a port selector to filter the ports that flow into the transformation and to
reference the ports in a join condition, a lookup condition, or a dynamic expression.
• You can define a dynamic expression in an Expression transformation. A dynamic expression returns
results to a dynamic output port. You can reference a port selector or a dynamic port in a dynamic
expression. When you reference a dynamic port or a port selector, the dynamic expression runs one time
for each port in the dynamic port or the port selector. The Expression transformation generates a separate
output port for each expression instance.
• Dynamic targets allow you to define the columns for flat file and relational targets at run time. Write
transformations can generate columns for the targets at run time based on an associated data object or
the mapping flow. Write transformations that represent relational targets can also create or replace tables
at run time.
• Transformations can have links between groups that determine which ports to connect at run time based
on a policy or a parameter.
• Sources and targets, rules for ports, and transformation properties can change at run time based on
parameters.
For more information about dynamic mappings, see the "Dynamic Mappings" chapter in the Informatica 10.0
Developer Mapping Guide.
Mapping Outputs
Effective in version 10.0, you can create mapping outputs that return aggregated values from the mapping
run. Mapping outputs are the result of aggregating a field value or an expression from each row that a
mapping processes.
For example, you can configure a mapping output to summarize the total amount of an order field from the
source rows that the transformation receives. You can persist a mapping output value in the repository. You
can assign a persisted mapping output value to the Mapping task input parameter. You can also assign
mapping outputs to workflow variables.
Create a mapping output in the mapping Outputs view. Define the expression to aggregate in an Expression
transformation in the mapping.
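For example, a mapping output definition might look like the following sketch (the names are illustrative, not from the product documentation):
Mapping output name: TotalOrderAmount
Aggregation expression: SUM( Order_Amount )
The Expression transformation evaluates SUM( Order_Amount ) across all rows that it receives, and the mapping returns the total as the TotalOrderAmount output when the run completes.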
For more information, see the Informatica 10.0 Developer Mapping Guide.
For more information, see the Mapping Tasks chapter in the Informatica 10.0 Developer Workflow Guide.
Optimization Methods
Effective in version 10.0, Informatica has the following new features for optimization methods:
Global predicate optimization
The Data Integration Service can apply the global predicate optimization method. When the Data
Integration Service applies the global predicate optimization method, it splits, moves, removes, or
simplifies the filters in a mapping. The Data Integration Service filters data as close to the source as
possible in the pipeline. It also infers the predicate expressions that a mapping generates.
For more information, see the "Mapping Optimization" chapter in the Informatica 10.0 Performance
Tuning Guide.
Pushdown optimization
You must select a pushdown type to push transformation logic to the source database. You can choose
to push down none of the transformation logic, partial transformation logic, or full transformation logic
to the source database. You can also view the mapping optimization plan for the pushdown type.
If the mapping has an Update Strategy transformation, you must determine pushdown compatibility for
the mapping before you configure pushdown optimization.
For more information, see the "Pushdown Optimization" chapter in the Informatica 10.0 Developer
Mapping Guide.
Dataship-join optimization
If a mapping requires a join between two tables of different sizes that reside in different databases, the Data Integration Service can apply the dataship-join optimization method.
For more information, see the "Mapping Optimization" chapter in the Informatica 10.0 Performance
Tuning Guide.
Mapping optimization plans
You can view how optimization methods affect mapping performance in a mapping optimization plan.
For more information, see the "Mapping Optimization" chapter in the Informatica 10.0 Performance
Tuning Guide.
Parameters
Effective in version 10.0, Informatica has the following new features for parameters:
Parameter usage
You can use parameters to represent additional properties such as connections, SQL statements, sort
and group-by port lists, expression variables, and the run-time environment.
Parameter types
You can use the following parameter types for dynamic mappings: expression, input link set, port, port
list, resource, and sort list.
Parameter binding
You can bind mapping parameters to mapplet parameters or to transformation parameters in the
Instance Value column of a Parameters tab. You can also bind mapplet parameters to transformation
parameters.
When you bind a parameter to another parameter, the parameter overrides the other parameter at run
time. You can create a mapping or a mapplet parameter from an existing parameter and bind the
parameters in one step. Click the Expose as Mapping Parameter option or the Expose as Mapplet
Parameter option for the parameter you want to override.
You can bind parameters from a mapping to parameters in a Read or Write logical data object mapping.
Parameter sets
You can define a parameter set for a workflow or mapping. A parameter set is an object in the Model
repository that contains a set of parameters and parameter values to use at run time. You use a
parameter set with a mapping, Mapping task, or workflow. You can add one or more parameter sets to
an application when you deploy the application. You can add a parameter set to multiple applications
and deploy them.
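For example, a parameter set might contain the following values (a sketch with illustrative names):
Parameter set: ps_DailyRun
For mapping m_LoadOrders:
p_SrcConnection = CONN_ORDERS_PROD
p_RunEnv = Hadoop
When you deploy the application with ps_DailyRun, the Mapping task resolves these parameter values at run time.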
Run-time environment
You can set the run-time environment with a parameter. Configure a string parameter at the mapping
level. Set the default value to Native or Hadoop. When you select the run-time environment for the
mapping, click Assign Parameter and select the parameter that you configured.
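For example, a minimal sketch of a run-time environment parameter (the parameter name is illustrative):
Parameter name: p_RunEnv
Parameter type: String
Default value: Native
Assign p_RunEnv to the run-time environment, and override its value with Hadoop in a parameter set to run the mapping in the Hadoop environment.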
For more information about parameters, see the Mapping Parameters chapter in the Informatica 10.0
Developer Mapping Guide.
Partitioned Mappings
Effective in version 10.0, Informatica has the following new features for partitioned mappings:
Partitioned transformations
Additional transformations support partitioning. When a mapping enabled for partitioning contains the
following transformations, the Data Integration Service can use multiple threads to transform the data:
• Address Validator
• Case Converter
• Classifier
• Comparison
• Data Masking
• Data Processor
• Decision
• Key Generator
• Labeler
• Match, when configured for identity match analysis
• Merge
• Normalizer
• Parser
• Sequence Generator
• Sorter
• Standardizer
• Weighted Average
For an Aggregator, Joiner, or Rank transformation, you can configure multiple cache directories to
optimize performance during cache partitioning for the transformation. You can use the default CacheDir
system parameter value if an administrator configured multiple cache directories for the Data Integration
Service. Or, you can override the default CacheDir system parameter value to configure multiple cache
directories specific to the transformation.
For a Sorter transformation, you can configure multiple work directories to optimize performance during
cache partitioning for the transformation. You can use the default TempDir system parameter value if an
administrator configured multiple temporary directories for the Data Integration Service. Or, you can
override the default TempDir system parameter value to configure multiple directories specific to the
transformation.
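For example, you might override the CacheDir system parameter on an Aggregator transformation with a value such as the following. This sketch assumes that, as with other Informatica directory lists, you separate multiple directories with semicolons; the paths are illustrative:
Cache Directory: /disk1/infa_cache;/disk2/infa_cache
During cache partitioning, the Data Integration Service can then distribute cache files across both disks.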
The Data Integration Service can create partitions for a mapping that establishes a sort order. You can
establish sort order in a mapping with a sorted flat file source, a sorted relational source, or a Sorter
transformation. When the Data Integration Service adds a partition point to a mapping, it might
redistribute data and lose the order established earlier in the mapping. To maintain order in a partitioned
mapping, you must specify that Expression, Java, Sequence Generator, SQL, and Write transformations
maintain the row order in the transformation advanced properties.
To optimize performance when multiple threads write to a flat file target, you can configure multiple
output file directories for a flat file data object. You can use the default TargetDir system parameter
value if an administrator has configured multiple target directories for the Data Integration Service. Or,
you can override the default TargetDir system parameter value to configure multiple output file
directories specific to the flat file data object.
If you override the maximum parallelism for a mapping, you can define a suggested parallelism value for
a specific transformation. The Data Integration Service uses the suggested parallelism value for the
number of threads for that transformation pipeline stage as long as the transformation can be
partitioned. You can define a suggested parallelism value that is less than the maximum parallelism
value defined for the mapping or the Data Integration Service. You might want to define a suggested
parallelism value to optimize performance for a transformation that contains many ports or performs
complicated calculations.
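For example, with the following illustrative values, the Data Integration Service uses at most two threads for the pipeline stage that contains the Expression transformation, while other stages can use up to eight threads:
Mapping maximum parallelism: 8
Suggested parallelism for the Expression transformation: 2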
For more information about partitioned mappings, see the "Partitioned Mappings" chapter in the Informatica
10.0 Developer Mapping Guide.
Run-time Properties
Effective in version 10.0, you can configure the following run-time properties for a mapping:
Stop on Errors
Stops the mapping if a nonfatal error occurs in the reader, writer, or transformation threads. Default is
disabled.
Commit Interval
The number of rows to use as a basis for a commit. The Data Integration Service commits data based on
the number of target rows that it processes and the constraints on the target table.
For more information, see the Informatica 10.0 Developer Mapping Guide.
Target Load Order Constraints
Effective in version 10.0, you can configure constraints to control the order in which rows are loaded and
committed across target instances in a mapping. Define constraints on the Load Order tab of the mapping
Properties view. Each constraint consists of a primary target name and a secondary target name to restrict
the load order.
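For example, the following constraint (target names are illustrative) causes rows to load to the CUSTOMER_TARGET instance before rows load to the ORDERS_TARGET instance, assuming that the primary target in a constraint loads first:
Primary target: CUSTOMER_TARGET
Secondary target: ORDERS_TARGET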
For more information, see the Informatica 10.0 Developer Mapping Guide.
Metadata Manager
This section describes new Metadata Manager features in version 10.0.
Tableau Resources
Effective in version 10.0, you can create and configure a Tableau resource to extract metadata from Tableau
Server.
For more information about creating and configuring Tableau resources, see the "Business Intelligence
Resources" chapter in the Informatica 10.0 Metadata Manager Administrator Guide.
For more information about supported metadata source versions, see the PCAE Metadata Manager XConnect
Support Product Availability Matrix on Informatica Network:
[Link]
Data Lineage Enhancements
Effective in version 10.0, data lineage diagrams have the following enhancements:
Summary lineage for PowerCenter mappings
When you view a data lineage diagram that includes a PowerCenter mapping, Metadata Manager
displays a summarized view of the mapping by default. The summary view displays mapping inputs and
outputs in the data lineage diagram but hides the transformation logic. The summary view reduces the
complexity of the data lineage diagram. It also reduces the amount of time it takes for Metadata
Manager to generate the data lineage diagram.
To view all of the transformation logic in a mapping, click Switch to Detail on the data lineage diagram
toolbar. The following image shows the Switch to Detail button:
To switch from the detail view back to the summary view, refresh the diagram.
Filter objects
You can filter the objects that appear in a data lineage diagram. You can filter individual objects or all
objects of a particular class. For example, you might want to remove all business terms from a data
lineage diagram. You can remove any filter that you apply.
Improved performance
Metadata Manager uses a file-based graph database for storing and retrieving data lineage linking
information. As a result, Metadata Manager generates data lineage diagrams more quickly than it did in
previous versions.
If Metadata Manager takes a long time to generate a data lineage diagram, you can cancel creation of
the diagram.
For more information about data lineage diagrams, see the "Working with Data Lineage" chapter in the
Informatica 10.0 Metadata Manager User Guide. For more information about configuring the Metadata
Manager lineage graph location, see the "Metadata Manager Service" chapter in the Informatica 10.0
Application Service Guide.
For more information about the metadata catalog views, see the "Viewing Metadata" chapter in the
Informatica 10.0 Metadata Manager User Guide.
For more information about Impala queries in Cloudera Navigator resources, see the "Database Management
Resources" chapter in the Informatica 10.0 Metadata Manager Administrator Guide.
If an Informatica Platform 10.x application includes a mapping that uses parameters, you can configure
Metadata Manager to use the parameter values from a parameter set. You assign a parameter set to a
mapping when you create an Informatica Platform resource. Metadata Manager uses the parameter values to
display the mapping objects and to display data lineage.
For more information about Informatica Platform resources, see the "Data Integration Resources" chapter in
the Informatica 10.0 Metadata Manager Administrator Guide.
Recent History
Effective in version 10.0, Metadata Manager maintains a history of the objects that you view in the metadata
catalog. Use the recent history to quickly return to an object that you previously viewed. Metadata Manager
clears the recent history when you log out.
For more information, see the "Viewing Metadata" chapter in the Informatica 10.0 Metadata Manager User
Guide.
Impact Summary
The impact summary lists the Session task instance because it can affect the data flow. A Session task
instance can override source or target connection information. It can also contain an SQL query that
overrides the default query used to extract data from the source.
For more information about the impact summary, see the "Viewing Metadata" chapter in the Informatica 10.0
Metadata Manager User Guide.
The following table describes new Metadata Manager application properties in [Link]:
Property          Description
[Link]            Maximum number of errors that the Metadata Manager Service can encounter before the custom resource load fails.
[Link]            Number of errors that the Metadata Manager Service writes to the in-memory cache and to the [Link] file in one batch when you load a custom resource.
For more information about the [Link] file, see the "Metadata Manager Properties Files" appendix in
the Informatica 10.0 Metadata Manager Administrator Guide.
High Availability
Effective in version 10.0, you can enable the PowerCenter Integration Service and PowerCenter client to read
from and write to a Hadoop cluster that uses a highly available NameNode.
For more information, see the "PowerExchange for Hadoop Configuration" chapter in the Informatica 10.0
PowerExchange for Hadoop User Guide for PowerCenter.
PowerExchange Adapters
This section describes new PowerExchange adapter features in version 10.0.
For more information, see the Informatica PowerExchange for DataSift 10.0 User Guide.
For more information, see the Informatica PowerExchange for Facebook 10.0 User Guide.
PowerExchange for Greenplum
Effective in version 10.0, PowerExchange for Greenplum includes the following features:
• You can configure dynamic partitioning for Greenplum data objects. You can configure the partition
information so that the Data Integration Service determines the number of partitions to create at run time.
• You can parameterize Greenplum data object operation properties to override the write data object
operation properties during run time.
• You can use the Max_Line_Length property to specify the maximum length of a line in the XML transformation data that is passed to gpload.
For more information, see the Informatica PowerExchange for Greenplum 10.0 User Guide.
For more information, see the Informatica PowerExchange for HBase 10.0 User Guide.
For more information, see the Informatica PowerExchange for HDFS 10.0 User Guide.
PowerExchange for LinkedIn
Effective in version 10.0, you can parameterize the LinkedIn data object read operation properties.
For more information, see the Informatica PowerExchange for LinkedIn 10.0 User Guide.
PowerExchange for SAP NetWeaver
Effective in version 10.0, PowerExchange for SAP NetWeaver includes the following features:
• You can use the Developer tool to create an SAP Table data object and a data object read operation. You
can then add the read operation as a source or lookup in a mapping, and run the mapping to read or look
up data from SAP tables.
• When you read data from SAP tables, you can configure key range partitioning. You can also use
parameters to change the connection and Table data object read operation properties at run time.
• You can run a profile against SAP Table data objects.
• When you create an SQL Data Service, you can add an SAP Table data object read operation as a virtual
table.
• You can read data from the SAP BW system through an open hub destination or InfoSpoke.
• When you read data from the SAP BW system, you can configure dynamic or fixed partitioning. You can
also use parameters to change the connection and BW OHS Extract data object read operation properties
at run time.
• You can write data to the SAP BW system. You can use a 3.x data source or a 7.x data source to write
data to the SAP BW system.
• When you write data to the SAP BW system, you can configure dynamic partitioning. You can also use
parameters to change the connection and BW Load data object write operation properties at run time.
• You can create an SAP connection in the Administrator tool.
• When you use the Developer tool to read data from or write data to SAP BW, you can create an SAP BW
Service in the Administrator tool.
For more information, see the Informatica PowerExchange for SAP NetWeaver 10.0 User Guide.
PowerExchange for Teradata Parallel Transporter API
Effective in version 10.0, PowerExchange for Teradata Parallel Transporter API includes the following features:
• You can use PowerExchange for Teradata Parallel Transporter API to read large volumes of data from
Teradata tables.
• You can use the Update system operator to perform insert, update, upsert, and delete operations against
Teradata database tables.
• You can use the Secure Sockets Layer (SSL) protocol to configure a secure connection between the
Developer tool and the Teradata database.
• You can configure dynamic partitioning for Teradata Parallel Transporter API data objects. You can
configure the partition information so that the Data Integration Service determines the number of
partitions to create at run time.
• You can parameterize Teradata data object operation properties to override the read and write data object
operation properties during run time.
For more information, see the Informatica PowerExchange for Teradata Parallel Transporter API 10.0 User
Guide.
For more information, see the Informatica PowerExchange for Twitter 10.0 User Guide.
For more information, see the Informatica PowerExchange for Web Content-Kapow Katalyst 10.0 User Guide.
Reference Data
This section describes new reference data features in version 10.0.
Classifier Models
Effective in version 10.0, you can perform the following actions in a classifier model in the Developer tool:
• Import reference data values and label values to a classifier model from a data source.
• Select the configurable options from a ribbon in the classifier model. For example, select the Manage
Labels option to access the options to add, delete, or update the label values in a classifier model.
• Use wildcard characters in the search filter in a classifier model.
• Add a single row of data to a classifier model.
• Apply a label value to multiple rows of classifier model data in a single operation.
For more information, see the "Classifier Models" chapter in the Informatica 10.0 Reference Data Guide.
Probabilistic Models
Effective in version 10.0, you can perform new actions in a probabilistic model in the Developer tool.
For more information, see the "Probabilistic Models" chapter in the Informatica 10.0 Reference Data Guide.
Rule Specifications
This section describes new rule specification features in version 10.0.
Linked Assets
Effective in version 10.0, the Design workspace in the Analyst tool displays a hyperlink to an asset that you
link to the rule specification. For example, if you use another rule asset in the rule specification, the
workspace displays a link to the rule asset. The Design workspace also displays a hyperlink to any rule that
you generate from the rule specification.
For more information, see the "Rule Specification Configuration" chapter of the Informatica 10.0 Rule
Specification Guide.
Mapplet Rules
Effective in version 10.0, you can use mapplet rules in the following ways:
• You can configure a rule specification that is valid during a time period that you define. You specify the
dates and times that indicate the start and the end of the time period. The time period also applies to any
mapplet rule that you compile from the rule specification. If you run a mapping that reads the mapplet rule
outside the time period, the mapping fails.
For more information, see the "Rule Specification Configuration" chapter of the Informatica 10.0 Rule
Specification Guide.
• You can add a mapplet rule to a condition and an action in a rule statement. Connect an input from the
rule specification to an input port on the mapplet rule. Or, use a constant value as an input to the mapplet
rule. Select an output port from the mapplet rule as output from the condition or the action.
For more information, see the "Rule Specification Configuration" chapter of the Informatica 10.0 Rule
Specification Guide.
Rule Statements
Effective in version 10.0, you can perform the following operations in a rule statement:
• You can move or copy a rule statement within a rule set, and you can move or copy a rule statement to
another rule set. You can move or copy a rule statement to a rule set in another rule specification. If you
move or copy a rule statement to another rule specification, the operation moves or copies the inputs that
the rule statement uses. The operation also moves or copies any test data that you entered and saved to
test the rule statement.
• You can move or copy a rule set to another location in the rule specification and to another rule
specification. If you move or copy a rule set to another rule specification, the operation moves or copies
the inputs and the test data that the rule set uses.
• You can move or copy test data from a rule specification to another rule specification.
• You can select the CONTAINS operator when you configure a condition in a rule statement. Use the
operator to determine the following information about the data values in an input column:
- Determine if an input column contains a data value that you enter.
- Determine if an input column contains a data value that appears on the same row in another input
column.
• You can configure a rule statement to search for an input value in a list of values that you enter.
• A rule set includes a predefined rule statement that specifies an action to perform when the preceding
rule statements generate no data. By default, the rule statement specifies that the rule set performs no
action. You can update the action in the rule statement.
• When you select the Inputs view for a rule set, the workspace hides any input that the rule set does not
contain.
• You can drag the rule specification in the workspace canvas.
• You can use the mouse wheel to zoom in and zoom out of the rule specification.
• You can expand and collapse the rule specification tree structure to show or hide different parts of the
rule specification.
• You can add a text description to an input.
• A rule set that reads the output of a child rule set displays the child rule set name in the list of inputs.
• A rule set that is not valid appears in a different color from a valid rule set.
• Some configurable options have new names.
For more information, see the Informatica 10.0 Rule Specification Guide.
Version Control
Effective in version 10.0, you can work with rule specifications in a versioned Model repository. If you open a
rule specification from a Model repository that uses version control, the Analyst tool applies the version
control properties to the rule specification. Use the Edit option in the Design workspace to check out a rule
specification from the repository. Use the Save and Finish option in the workspace to check in the rule
specification. You can also undo a checkout operation.
You can view an earlier version of the rule specification and revert to an earlier version in edit mode and in
read-only mode. When you view an older version of a rule specification in read-only mode, you can perform all
of the read-only operations that apply to the current version of the rule specification. You can view and
validate a rule specification in read-only mode. You can test a rule specification in read-only mode if the rule
specification contains test data.
For more information, see the "Model Repository" chapter in the Informatica 10.0 Analyst Guide.
Security
This section describes new security features in version 10.0.
Groups
Effective in version 10.0, Informatica includes a default group named Operator. Use the Operator group to
manage multiple users who are assigned the Operator role.
Privileges
Effective in version 10.0, Informatica includes the following new privileges:
Model Repository Service privilege
The Manage Team-based Development privilege allows Model repository administrators to perform
actions related to object lock management and versioned object management.
The Scheduler privilege group determines the actions that users can perform on schedules and
scheduled jobs.
For more information, see the "Command Line Privileges and Permissions" appendix in the Informatica 10.0
Security Guide.
Roles
Effective in version 10.0, Informatica includes a custom role named Operator. The Operator role includes
privileges for managing, scheduling, and monitoring application services.
Informatica Functions
This section describes new features of Informatica functions in version 10.0.
CaseFlag
Effective in version 10.0, the CaseFlag option does not support NULL values for the following functions:
GREATEST, LEAST, IN, and INDEXOF.
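For example, the following expression (the port and values are illustrative) performs a case-insensitive search because the CaseFlag argument is 0. Effective in version 10.0, pass a numeric CaseFlag value such as 0 or 1 instead of NULL:
IN( ITEM_NAME, 'Regulator', 'Knife', 'Tank', 0 )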
For more information, see the "Functions" chapter in the Informatica 10.0 Developer Transformation Language
Reference.
TO_DECIMAL38 Function
Effective in version 10.0, you can use the TO_DECIMAL38 function to convert a string or numeric value to a
decimal value. The function returns a decimal value of precision and scale between 0 and 38, inclusive.
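For example, the following expression (a minimal sketch; the port name is illustrative, and the second argument is assumed to be the scale, following the same pattern as TO_DECIMAL) converts a string port to a decimal value with a scale of 20:
TO_DECIMAL38( ORDER_TOTAL, 20 )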
For more information, see the Informatica 10.0 Transformation Language Reference.
Transformations
This section describes new transformation features in version 10.0.
For more information, see the "Mapping Parameters" chapter of the Informatica 10.0 Developer Mapping
Guide.
Data Processor Transformation
This section describes new Data Processor transformation features in version 10.0.
Library Objects
The Library object contains many objects and components, such as Parsers, Serializers, and XML schemas, preset to transform industry-standard input and specific application messages into XML or other output.
Some libraries contain additional objects for message validation, acknowledgments, and diagnostic displays.
You can also customize the properties and validation settings of the Library object.
You can create Library objects for the DTCC-NTCC, EDIFACT, EDI-X12, HIPAA, HL7, and SWIFT libraries.
For more information, see the Informatica Data Transformation 10.0 User Guide and the Informatica Data
Transformation 10.0 Libraries Guide.
For more information about custom data types, see the Informatica Developer 10.0 User Guide.
RunMapplet Statement for XMap
Effective in version 10.0, you can define a RunMapplet mapping statement to call a mapplet from an XMap in a Data Processor
transformation. One or more MappletInput and MappletOutput statements can be nested under the
RunMapplet statement. Values are mapped to the mapplet input ports in the same order that they are listed in
the MappletInput statements. The values in the mapplet output ports are mapped to the MappletOutput
statement in the same order that they are listed in the mapplet ports.
For more information, see the Informatica Data Transformation 10.0 User Guide.
Decision Transformation
Effective in version 10.0, you can use parameters to specify input values in a Decision transformation script.
For more information, see the "Mapping Parameters" chapter of the Informatica 10.0 Developer Mapping
Guide.
Expression Transformation
This section describes the new features in the Expression transformation.
Dynamic Expressions
Effective in version 10.0, you can create an expression in a dynamic output port. When you create an
expression in a dynamic port, the expression is a dynamic expression. A dynamic expression might
generate more than one output port when the expression contains a port selector or a dynamic port.
When the dynamic expression runs against multiple ports, the expression returns an output value for
each port.
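For example, the following sketch assumes a port selector named NameFields that contains the string ports of the transformation. The expression runs once for each port in the selector, and the transformation generates a separate output port for each result:
LTRIM( RTRIM( NameFields ) )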
For more information about dynamic expressions, see the Expression Transformations chapter in the
Informatica 10.0 Developer Transformation Guide.
Mapping Outputs
Effective in version 10.0, you can configure mapping outputs. A mapping output is a single value that is
the result of aggregating a field or expression from each row that the mapping processes. For example,
a mapping output can summarize the total amount of an order field from all the source rows that the
transformation receives. A mapping output expression is a field value or an expression to aggregate
from the rows that the Expression transformation receives. You must define a mapping output in the
mapping Properties view, before you can create the corresponding expression in the Expression
transformation.
For more information about mapping outputs, see the Mapping Outputs chapter in the Informatica 10.0
Developer Mapping Guide.
Test Expressions
Effective in version 10.0, you can test expressions that you configure in the Expression Editor. When you test an expression, you enter sample data and then evaluate the expression.
The following image shows the results of an expression that concatenates a sample first name and last
name:
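An expression of that form might look like the following sketch (the port names are illustrative):
CONCAT( CONCAT( FirstName, ' ' ), LastName )
Enter sample values such as John and Smith for the two ports and evaluate the expression to verify that it returns John Smith.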
For more information about testing expressions, see the "Expression Transformation" chapter in the
Informatica 10.0 Developer Transformation Guide.
Hierarchical to Relational Transformation
The Hierarchical to Relational transformation is an optimized transformation introduced in version 10.0 that
converts hierarchical input to relational output.
For more information, see the Informatica 10.0 Developer Transformation Guide.
Match Transformation
Match Type Options in Identity Match Analysis
Effective in version 10.0, you can select the following options when you configure the Match transformation
to read a persistent store of identity index data:
• The transformation deletes rows from the index tables if the rows share sequence identifiers with rows in the mapping source data. The transformation does not perform match analysis when you select the option.
• The transformation replaces rows in the index tables with rows from the mapping source data if the rows share sequence identifiers. The transformation does not add rows to the index. The transformation can include the rows that it does not add in the match analysis.
For more information, see the "Match Transformations in Identity Analysis" chapter of the Informatica 10.0
Developer Transformation Guide.
Effective in version 10.0, the Match transformation can indicate the status of the index data for each record that it processes. The index data can have one of the following statuses:
Absent
The index data store does not contain data for the current record.
Invalid
The transformation cannot analyze the current record. For example, the transformation cannot generate
index data for the record because the key field on the Match Type tab is not compatible with the record
data.
Removed
The transformation removes the index data for the record from the index data store.
Updated
The transformation updates the rows in the persistent data store with index data from the
transformation input record. The transformation input data and the persistent index data have common
sequence identifiers.
For more information, see the "Match Transformation" chapter of the Informatica 10.0 Developer
Transformation Guide.
Parameter Usage
Effective in version 10.0, you can use parameters to set options on the Match transformation.
For more information, see the "Mapping Parameters" chapter of the Informatica 10.0 Developer Mapping
Guide.
Sequence ID Port
Effective in version 10.0, the Match transformation output ports include a Sequence ID port when you
configure the transformation to read a persistent index store. The transformation uses the sequence
identifier values to track the index data through the different stages of the match analysis.
For more information, see the "Match Transformation" chapter of the Informatica 10.0 Developer
Transformation Guide.
SQL Transformation
Effective in version 10.0, you can parameterize the connection for an SQL transformation. Define the
parameter in the mapping. Then, assign the parameter to the Connection Name in the SQL transformation
run-time properties.
For more information, see the "SQL Transformation" chapter in the Informatica 10.0 Developer Transformation Guide.
Effective in version 10.0, you can add dynamic ports to some transformations. You can also parameterize
which input ports to link to ports from an upstream transformation. You can configure port selectors to
reference multiple ports in transformation logic.
The transformations contain the following new tabs in the Properties view:
Group By
The Aggregator transformation, the Rank transformation, and the Sorter transformation require that you
configure groups of ports. You can now configure the groups on a Group By tab. You can define groups
by selecting ports or you can configure parameters that contain port lists. The Group By tab provides
flexibility when you configure the transformations with generated ports.
Port Selector
You can reference multiple ports in transformation logic. Define a port selector, which is an ordered list
of ports. You can reference port selectors in dynamic expressions, join conditions, or lookup
conditions. When you define a port selector, you can include or exclude transformation ports based on
the port name, the port type, or a pattern of text characters.
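For example, a port selector might be defined as follows (the name and rule are illustrative):
Port selector name: DecimalAmounts
Selection rule: include ports of type decimal with names that match the pattern *_AMT
You can then reference DecimalAmounts in a dynamic expression, a join condition, or a lookup condition.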
Run-time Linking
When you configure transformations in a dynamic mapping, you can set parameters or link policies that
determine which ports to link between transformations. Configure run-time linking to link dynamic ports
to static ports. You can configure a link policy to link ports by name. You can configure an InputLinkSet
parameter to specify the names of the ports to link at run time.
Workflows
This section describes new workflow features in version 10.0.
Informatica Workflows
This section describes new features in Informatica workflows in version 10.0.
Parallel Execution of Workflow Tasks
Effective in 10.0 Update 1, the Data Integration Service can run tasks on multiple sequence flows in a
workflow in parallel. To create the parallel sequence flows, add Inclusive gateways to the workflow in the
Developer tool.
Use an Inclusive gateway to split a sequence flow into multiple sequence flows. The Data Integration Service
runs the objects on every branch with a sequence flow condition that evaluates to true. The Data Integration
Service runs the objects on each branch concurrently. Use another Inclusive gateway to merge the sequence
flows into a single sequence flow. When the objects on all branches are complete, the Data Integration
Service passes the data from the second Inclusive gateway to the next object in the workflow.
You can add one or more instances of any type of task to a sequence flow between two Inclusive gateways.
You cannot add a Human task or a Voting task to more than one sequence flow between two Inclusive
gateways.
For more information, see the Informatica 10.0 Update 1 Developer Workflow Guide.
Mapping Tasks
Effective in version 10.0, Informatica has the following new features for Mapping tasks:
Mapping task log directory
You can configure the directory where the Data Integration Service writes the Mapping task log. By
default, the Data Integration Service writes the Mapping task log file in the directory defined by the
system parameter, LogDir. The default location is disLogs/mappingtask. You can configure a different
directory for the Mapping task log file in the Mapping task Advanced properties. You can parameterize
the log file directory.
Mapping task log file name
You can configure a file name for the Mapping task log file. The Data Integration Service appends the file name to the information in the Mapping Task Log File Directory field. It appends the log file name to a UID and time stamp or to a mapping run number, based on how you choose to save the log file. You can parameterize the log file name. Configure the log file name in the Mapping task Advanced properties.
Mapping task log save type
You can save the Mapping task log file by timestamp or by the number of mapping task runs. The suffix
of the mapping task log file name reflects the option you select. You can configure how many log files to
save.
Java classpath
You can enter the classpath to add to the beginning of the system classpath when the Data Integration
Service runs the mapping task. Enter a Java classpath in the Advanced properties if you use third-party
Java packages, built-in Java packages, or custom Java packages in a Java transformation.
Parameter usage
Effective in version 10.0, you can view which objects in a mapping use a specific parameter. Select a
parameter on the Mapping task Input tab, and click Parameter Usage.
Custom properties
You can define custom properties for a Mapping task and configure the property values. You can also
parameterize a custom property.
For more information, see the Informatica 10.0 Developer Workflow Guide.
Changes (10.0)
This chapter includes the following topics:
• Installation, 227
• Application Services, 228
• Big Data, 234
• Business Glossary, 234
• Command Line Programs, 235
• Domain, 236
• Informatica Administrator, 236
• Informatica Analyst, 238
• Informatica Developer, 240
• Mappings, 242
• Metadata Manager, 244
• PowerCenter, 246
• PowerExchange Adapters, 247
• Reference Data, 249
• Rule Specifications, 250
• Security, 250
• Sources and Targets, 251
• Transformations, 251
• Workflows, 254
Installation
This section describes changes to the Informatica installation in version 10.0.
Changed Support
Effective in version 10.0, Informatica implemented the following changes in support that affect upgrade:
Windows 32-bit: Informatica dropped support for application services and for the Developer tool. Migrate to a supported operating system before you upgrade.
For more information about product requirements and supported platforms, see the Product Availability
Matrix on Informatica Network:
[Link]
Application Services
This section describes changes to application services in version 10.0.
Analyst Service
This section describes changes to Analyst Service features in version 10.0.
Stop Mode
Effective in version 10.0, the Analyst Service has complete, abort, and stop modes to disable the Analyst
Service. Select the stop mode to stop all jobs, and then disable the Analyst Service.
Previously, only complete and abort modes were available to disable the service.
For more information, see the "Analyst Service" chapter in the Informatica 10.0 Application Service Guide.
Email Server
Effective in version 10.0, you can no longer configure an email server for the Data Integration Service. The
email server properties for the Data Integration Service are removed. Scorecard notifications use the email
server configured for the domain. Workflow notifications use the email server configured for the Email
Service. Workflow notifications include emails sent from Human tasks and Notification tasks in workflows.
Previously, scorecard and workflow notifications used the email server configured for the Data Integration
Service.
The upgrade determines the email server to use based on the following notification types:
Scorecard notifications
Scorecard notifications use the email server configured for the domain. If you did not configure SMTP
for the domain in the previous version, the upgraded domain uses the email server configured for the
first Data Integration Service encountered during the upgrade. If you configured SMTP for the domain in
the previous version, the upgraded domain continues to use that email server.
Some email server properties that were available on the Data Integration Service in previous versions are not available on the domain. You can no longer configure these properties for scorecard notifications.
Before you send scorecard notifications in version 10.0, verify that SMTP is correctly configured for the
domain. To use the same email server configured for the Data Integration Service in previous versions,
record the Data Integration Service values before upgrading.
Workflow notifications
Workflow notifications use the email server configured for the Email Service.
Some email server properties that were available on the Data Integration Service in previous versions are not available on the Email Service. You can no longer configure these properties for workflow notifications.
Before you send workflow notifications in version 10.0, configure an email server for the Email Service,
and then enable the Email Service. To use the same email server configured for the Data Integration
Service in previous versions, record the Data Integration Service values before upgrading.
For more information about configuring SMTP for the domain, see the "Domain Management" chapter in the
Informatica 10.0 Administrator Guide.
For more information about the Email Service, see the "System Services" chapter in the Informatica 10.0
Application Service Guide.
Execution Options
Effective in version 10.0, you configure the following execution options on the Properties view for the Data
Integration Service:
• Home Directory
• Temporary Directories
• Cache Directory
• Source Directory
• Target Directory
• Rejected Files Directory
Previously, you configured the execution options on the Processes view for the Data Integration Service. You
could configure the execution options differently for each node where a service process ran.
If you configured the execution options differently for each service process in a previous version, the upgrade
determines the version 10.0 values based on the following situations:
If the option defines a maximum integer value, the highest value defined for all processes is used as the
Data Integration Service value on the Properties view. If the option defines a string value, the value
defined for the first node encountered during the upgrade is used as the Data Integration Service value
on the Properties view.
The value defined on the Processes view for a node is used as the compute override on the Compute
view for the same node. The value defined for the first node encountered during the upgrade is used as
the Data Integration Service value on the Properties view.
For more information about the execution options, see the "Data Integration Service" chapter in the
Informatica 10.0 Application Service Guide.
Maximum Memory Per Request
The upgraded service uses the version 10.0 default value for each module. If you changed the default value
of Maximum Session Size in a previous version, you must change the value of Maximum Memory Per
Request after you upgrade.
Launch Job Options
Effective in version 10.0, the Launch Job Options property determines where the Data Integration Service runs jobs. Select one of the following options:
In the service process
Runs jobs in the Data Integration Service process. Configure when you run SQL data service and web
service jobs on a single node or on a grid where each node has both the service and compute roles. SQL
data service and web service jobs typically achieve better performance when the Data Integration
Service runs jobs in the service process.
In separate local processes
Runs jobs in separate DTM processes on the local node. Configure when you run mapping, profile, and
workflow jobs on a single node or on a grid where each node has both the service and compute roles.
When the Data Integration Service runs jobs in separate local processes, stability increases because an
unexpected interruption to one job does not affect all other jobs.
In separate remote processes
Runs jobs in separate DTM processes on remote nodes. Configure when you run mapping, profile, and
workflow jobs on a grid where nodes have a different combination of roles.
When the Data Integration Service runs jobs in separate remote processes, stability increases because
an unexpected interruption to one job does not affect all other jobs. In addition, you can better use the
resources available on each node in the grid. When a node in a Data Integration Service grid has the
compute role only, the node does not have to run the service process. The machine uses all available
processing power to run mappings.
Previously, you set the Launch Jobs in Separate Processes property to true to run jobs in the Data Integration
Service process. You set the property to false to run jobs in separate DTM processes on the local node.
For more information about running jobs in separate processes, see the "Data Integration Service
Management" chapter in the Informatica 10.0 Application Service Guide.
Effective in version 10.0, you select the Workflow Orchestration Service Module to enable the Data
Integration Service to run workflows.
Effective in version 10.0, the Workflow Orchestration Service Module runs all tasks in a workflow.
Previously, the Workflow Service Module ran all workflow tasks except Human tasks. The Human Task
Service Module ran any Human task in a workflow.
Workflow database replaces the Model repository and Human task database as workflow metadata store
Effective in version 10.0, a single database stores all run-time metadata for workflows, including Human
task instance metadata. Select the workflow database connection on the Data Integration Service.
Previously, you selected a database to store Human task metadata on the Data Integration Service. The
Model repository stored all other run-time metadata for workflows.
Model Repository Service
Effective in version 10.0, when you begin to edit an object, the Model repository locks the object so that other users cannot save changes to the same object.
Previously, more than one user was allowed to open and edit an object. Only the last user who tried to save
the object received a notification that the object had been changed by another user.
If the Model repository is integrated with a version control system, you must check out an object before you
edit it.
For more information, see the "Model Repository" chapter in the Informatica 10.0 Developer Tool Guide.
SAP BW Service
This section describes changes to the SAP BW Service in version 10.0.
To create an SAP BW Service for PowerCenter, log in to Informatica Administrator. In the Domain Navigator,
right-click the domain, and click Actions > New > PowerCenter SAP BW Service.
Previously, you clicked Actions > New > SAP BW Service to create an SAP BW Service for PowerCenter.
Note: Effective in version 10.0, the SAP BW Service option is reserved for creating an SAP BW Service for the
Developer tool.
For more information, see the "SAP BW Service" chapter in the Informatica 10.0 Application Service Guide.
Big Data
This section describes changes to big data features in version 10.0.
Hive Environment
Effective in version 10.0, the Hive environment no longer appears as a run-time or validation environment in
the Developer tool user interface. The Hive environment is changed to the Hive engine, which uses Hadoop technologies such as MapReduce or Tez to process batch data.
For more information, see the Informatica 10.0 Big Data Edition User Guide.
Previously, you had to download and manually install the JCE policy file for AES encryption.
Kerberos Authentication
Effective in version 10.0, a Hadoop cluster cannot use only an MIT key distribution center (KDC) for Kerberos
authentication. Hadoop clusters can use a Microsoft Active Directory KDC or an MIT KDC connected to Active
Directory with a one-way cross-realm trust.
Business Glossary
This section describes changes to Business Glossary in version 10.0.
Relationship View
Effective in version 10.0, the relationship view has the following changes:
For more information, see the "Finding Glossary Content" chapter in the Informatica 10.0 Business Glossary
Guide.
Asset Phase
Effective in version 10.0, the asset phase has the following changes:
For more information, see the Informatica 10.0 Business Glossary Guide.
Library Workspace
Effective in version 10.0, the Library workspace has the following changes:
Sort Assets
When you view the assets by asset type, you can sort Glossary assets by status and phase in the Library
workspace. Previously, you could not sort by the status and phase of the asset.
Find Option
When you look up assets by glossary, the option to enter search strings in the filter panel is no longer
available. Previously, you could search for assets when you looked up assets by glossary.
For more information, see the Informatica 10.0 Business Glossary Guide.
When you export a glossary, you now have an option to include attachments and audit history. The Analyst
tool generates a .zip file when you export the audit history or attachments along with Glossary assets.
For more information, see the "Glossary Administration" chapter in the Informatica 10.0 Business Glossary
Guide.
Logs
Effective in version 10.0, the default location for system logs is <Informatica installation directory>/
logs/<node name>/.
The domain stores application services logs and system logs in the default location. You can change the
default directory path for logs with the System Log Directory option. You can use this option with any of the
following commands:
• infasetup DefineDomain
• infasetup DefineGatewayNode
• infasetup DefineWorkerNode
• infasetup UpdateGatewayNode
• infasetup UpdateWorkerNode
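For example, a command along the following lines sets a custom system log directory when you update a
gateway node. This is a minimal sketch: the -sld shorthand for the System Log Directory option and the
directory path are assumptions, so verify the option name and any required arguments in the Informatica 10.0
Command Reference:
infasetup UpdateGatewayNode -sld /opt/Informatica/custom_logs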
Previously, the domain stored application services logs and system logs in different locations. The default
directory for system logs was <Informatica installation directory>/tomcat/logs/.
For more information, see the "Log Management" chapter in the Informatica 10.0 Administrator Guide.
Log Format
Effective in version 10.0, all logs consistently contain the following information by default:
• Thread name.
• Timestamp, in milliseconds.
Previously, this information was not consistent in logs. For example, some logs did not contain timestamp
information, and of those that did, the timestamp format was not consistent.
For more information, see the "Log Management" chapter in the Informatica 10.0 Administrator Guide.
Previously, the DTM stored the log files in a folder named builtinhandlers.
Informatica Administrator
This section describes changes to the Administrator tool in version 10.0.
Domain tab
Effective in version 10.0, the Domain tab is renamed the Manage tab.
The Manage tab includes the Domain and Schedules views. Use the Domain view to view and manage
the status and resource consumption of the domain. Use the Schedules view to create and manage
reusable schedules for deployed mappings and workflows.
Dependency graph
The dependency graph is moved from the Services and Nodes view to the Domain view. To access the
dependency graph, click the Actions menu for the domain, a service, or a node, and then choose View
Dependencies.
Global Settings
Global Settings are moved from the Monitor tab, formerly the Monitoring tab, and are renamed
Monitoring Configuration. Monitoring Configuration appears as a view in the Services and Nodes view.
Overview views
The Overview views for the domain and folders in the Services and Nodes view are removed. They are
replaced by the Domain view on the Manage tab.
Global Settings
Global Settings have the following changes:
• Global Settings are moved from the Monitor tab Actions menu to the Manage tab. Configure global
settings in the Monitoring Configuration view in the Services and Nodes view.
• The Number of Days to Preserve Historical Data option is renamed Preserve Summary Historical Data.
Minimum is 0. Maximum is 366. Default is 180.
• The Date Time Field option is renamed Show Milliseconds in Date Time Field.
Jobs
Jobs that users run from the Developer and Analyst tools are called ad hoc jobs. Ad hoc jobs include
previews, mappings, reference tables, enterprise discovery profiles, profiles, and scorecards. Previously, ad
hoc jobs were called jobs.
Navigation
The Monitoring tab is renamed the Monitor tab. Object monitoring is moved to the Execution Statistics view.
Preferences
The Preferences option in the Monitor tab Actions menu is renamed Report and Statistic Settings.
For more information, see the "Monitoring" chapter in the Informatica 10.0 Administrator Guide.
Informatica Analyst
This section describes changes to the Analyst tool in version 10.0.
Profiles
Effective in version 10.0, profiles in the Analyst tool have the following changes:
Column Profile
Effective in version 10.0, you can create a column profile with the Specify General Properties, Select Source,
Specify Settings, and Specify Rules and Filters steps in the profile wizard.
Previously, you created a column profile with the Step 1 of 6 through Step 6 of 6 steps in the profile wizard.
For more information about column profile, see the "Column Profiles in Informatica Analyst" chapter in the
Informatica 10.0 Data Discovery Guide.
Previously, the profile results were displayed in Column Profiling, Properties, and Data Preview views.
For more information about column profile results, see the "Column Profile Results in Informatica Analyst"
chapter in the Informatica 10.0 Data Discovery Guide.
Previously, you could click Actions > Edit to select and edit one of the options.
For more information about column profile, see the "Column Profiles in Informatica Analyst" chapter in the
Informatica 10.0 Data Discovery Guide.
Discovery Workspace
Effective in version 10.0, you can click Discovery workspace > Profile, and choose to create a single source
profile or enterprise discovery profile in the profile wizard.
Previously, you had to click Discovery workspace > Data Object Profile to create a profile, or click Discovery
workspace > Enterprise Discovery Profile to create an enterprise discovery profile.
For more information about column profile, see the "Column Profiles in Informatica Analyst" chapter in the
Informatica 10.0 Data Discovery Guide.
New Option
Effective in version 10.0, you can click New > Profile in the header area, and choose to create a single source
profile or enterprise discovery profile in the profile wizard.
Previously, you had to click New > Data Object Profile to create a profile, or click New > Enterprise Discovery
Profile to create an enterprise discovery profile.
For more information about column profile, see the "Column Profiles in Informatica Analyst" chapter in the
Informatica 10.0 Data Discovery Guide.
Create a Rule
Effective in version 10.0, you can create, add, or delete rules for a profile in the profile wizard.
Previously, you had to click Actions > Edit > Column Profiling Rules to add, delete, or create rules for the
profile.
For more information about rules, see the "Rules in Informatica Analyst" chapter in the Informatica 10.0 Data
Discovery Guide.
For more information about column profiles, see the "Column Profiles in Informatica Analyst" chapter in the
Informatica 10.0 Data Discovery Guide.
Filters
Effective in version 10.0, all the filters that you create for a profile apply to all the columns and data
domains in the profile, and you can reuse the filters in a scorecard that you create on the profile.
For more information about filters, see the "Filters in Informatica Analyst" chapter in the Informatica 10.0
Data Discovery Guide.
Sampling Options
Effective in version 10.0, the sampling options apply to both column profiles and data domain discovery.
Previously, you could select different sampling options for the column profile and data domain discovery.
Scorecards
This section describes changes to scorecards in the Analyst tool.
Notifications
Effective in version 10.0, scorecards send notifications using the email server configuration in the domain
SMTP Configuration properties.
Previously, scorecards used the email server configuration in the Data Integration Service properties.
Scorecard URL
Effective in version 10.0, when you add a scorecard URL to the source code of an external application or web
portal and access the URL, you must log in to Informatica Analyst to view the scorecard. The login
requirement improves security.
Previously, the scorecard URL for external applications did not prompt for login.
Informatica Developer
This section describes changes to the Developer tool in version 10.0.
Previously, the Deploy dialog box gave you a choice of "Update" or "Replace." The "Retain state information"
check box replaces the "Update" check box, and is selected by default.
If you select "Retain state information," you retain run-time settings and properties in the deployed
application. If you clear "Retain state information," you discard the state of these settings and properties in
the deployed application.
Previously, you configured the format and run-time properties for a flat file data object in the Read and Write
views. In the Read view, you selected the source transformation to configure format properties. You selected
the Output transformation to configure run-time properties. In the Write view, you selected the Input
transformation to configure run-time properties. You selected the target transformation to configure format
properties.
• You can use the ODBC connection type to connect to Microsoft SQL Server.
• You can upgrade your existing connections by using the pmrep and infacmd commands. When you run the
upgrade command, all the existing connections are upgraded.
• The existing Microsoft SQL Server connection is deprecated and support will be dropped in the next major
release. You can run the existing mappings without manual updates. If you are using SSL connections,
you must select the provider type as ODBC in the connection, and configure SSL in the DSN.
Previously, you used the same editor to edit logical data objects and logical data object models.
For more information, see "Logical View of Data" chapter in the Informatica 10.0 Developer Tool Guide.
Previously, you clicked File > New to create logical data object mappings.
Mappings
This section describes changes to mappings in version 10.0.
Parameter Files
Effective in version 10.0, the parameter file format is changed. The parameter file no longer contains
transformation parameters.
You can run mappings and workflows with the parameter files from previous versions. When you run a
mapping or workflow with the previous version parameter file, the Data Integration Service converts the
parameter file to the Informatica 10.0 version.
When you create a parameter file with the infacmd listMappingParams command, the Data Integration
Service creates a mapping parameter file without transformation parameters. The infacmd
listWorkflowParams command creates a workflow parameter file without transformation parameters.
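For example, a command along the following lines creates a 10.0-format parameter file for a deployed
mapping. This is a sketch: the angle-bracket values are placeholders and the option abbreviations are
assumptions, so verify the syntax in the Informatica 10.0 Command Reference:
infacmd ms ListMappingParams -dn <domain name> -sn <service name> -un <user name> -pd <password>
-a <application name> -m <mapping name> -o MyMapping_params.xml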
In previous versions, when you created parameter files, the parameter files contained transformation
parameters.
For more information about parameter files, see the Mapping Parameters chapter of the Informatica
Developer Mapping Guide.
Partitioned Mappings
This section describes changes to partitioned mappings in version 10.0.
Previously, the Data Integration Service calculated a single actual parallelism value and used that same value
for each mapping pipeline stage. The service calculated the actual parallelism value based on the maximum
parallelism values and on the maximum number of partitions for all flat file, IBM DB2 for LUW, or Oracle
sources read by a mapping.
Partitioned Targets
Effective in version 10.0, if a mapping establishes order with a sorted relational source or a Sorter
transformation, the Data Integration Service can use multiple threads to run the mapping. To maintain order
in a partitioned mapping, you must specify that targets maintain the row order in the advanced properties for
the Write transformation. When you configure Write transformations to maintain row order, the Data
Integration Service uses a single thread to write to the target.
Previously, if a mapping included a sorted relational source, the Data Integration Service used one thread to
process each mapping pipeline stage. If a mapping included a Sorter transformation, the Data Integration
Service used one thread to process the Sorter transformation and all downstream mapping pipeline stages.
If you upgrade from an earlier version, all existing Write transformations are configured to maintain row
order. The Data Integration Service uses a single thread to write to the target to ensure that any order
established in the mapping is maintained. If an upgraded mapping does not establish an order, you can clear
the Maintain Row Order property in the advanced properties for a Write transformation so that the Data
Integration Service can use multiple threads to write to the target.
You can configure a Java transformation to maintain the row order of the input data by selecting the
Stateless advanced property for the transformation.
Previously, you cleared the stateless property if the Java transformation needed to be processed with one
thread. When the stateless property was cleared, the Data Integration Service did not create partitions for the
entire mapping.
Previously, when a mapping contained a transformation that did not support partitioning, the Data Integration
Service did not create partitions for the mapping. The service used one thread to process each mapping
pipeline stage.
For more information about partitioned mappings, see the "Partitioned Mappings" chapter in the Informatica
10.0 Developer Mapping Guide.
Pushdown Optimization
Effective in version 10.0, pushdown optimization is removed from the mapping optimizer level. To configure
a mapping for pushdown optimization, you must select a pushdown type in the mapping run-time properties.
Previously, the Data Integration Service applied pushdown optimization by default with the normal or full
optimizer level.
For more information, see the Informatica 10.0 Developer Mapping Guide.
Run-time Properties
Effective in version 10.0, configure Validation Environments on the Run-time tab. The mapping Properties
view no longer contains an Advanced properties tab.
Previously, you configured the Validation Environments property on the Advanced properties tab.
For more information, see the Informatica 10.0 Developer Mapping Guide.
Metadata Manager
This section describes changes to Metadata Manager in version 10.0.
Previously, the PowerCenter Integration Service used native connectivity to connect to the Informix database.
You could create and load Informix resources only when the Informatica domain ran on 32-bit Windows.
For more information about configuring Informix resources, see the "Database Management Resources"
chapter in the Informatica 10.0 Metadata Manager Administrator Guide.
Therefore, you no longer need to perform the following tasks when you configure a Microsoft SQL Server
resource:
• On Windows, you do not need to install the Microsoft SQL Server Native Client.
• On UNIX, you do not need to create a data source for the Microsoft SQL Server database in the odbc.ini
file.
Note: If you previously created a data source in the odbc.ini file, you can still use it by entering the data
source name as the connect string.
• You do not need to set the ODBC Connection Mode property for the Metadata Manager Service in the
Administrator tool. This property is removed because the connection mode for Microsoft SQL Server is
always ODBC.
Previously, the PowerCenter Integration Service used native connectivity on Windows and ODBC connectivity
on UNIX.
For more information about configuring Microsoft SQL Server resources, see the "Database Management
Resources" chapter in the Informatica 10.0 Metadata Manager Administrator Guide.
• When you view metadata details for a session task instance, Metadata Manager lists the mappings that
the session task instance runs as related catalog objects but not in the impact summary.
Previously, Metadata Manager listed the mappings as related catalog objects and in the upstream and
downstream impact summary.
• When you view metadata details for a mapplet instance that contains a source definition, Metadata
Manager does not list the parent mapping in the impact summary.
Previously, Metadata Manager listed the parent mapping in the downstream impact summary.
• When you view metadata details for a mapplet instance that does not contain a source, Metadata
Manager does not display an impact summary.
Previously, Metadata Manager displayed an impact summary for mapplet instances that do not contain a
source.
• When you view metadata details for an Input or Output transformation instance in a mapplet, Metadata
Manager does not display an impact summary.
Previously, Metadata Manager displayed an impact summary for Input and Output transformation
instances in a mapplet.
• When you view metadata details for a Source Qualifier instance in a mapplet that contains a source
definition, Metadata Manager does not display the parent mapping in the impact summary.
Previously, Metadata Manager displayed the parent mapping in the impact summary.
For more information about the impact summary, see the "Viewing Metadata" chapter in the Informatica 10.0
Metadata Manager User Guide.
For more information about the Max Concurrent Resource Load property, see the "Metadata Manager
Service" chapter in the Informatica 10.0 Application Service Guide.
Search
Effective in version 10.0, Metadata Manager displays the advanced search criteria and the search results in
the Search Results panel at the bottom of the Browse tab. The Search Results panel allows you to view the
metadata catalog, business glossaries, shortcuts, or data lineage diagram while you perform a search. You
can resize, minimize, and restore the Search Results panel.
Previously, Metadata Manager displayed the advanced search criteria and the search results on a separate
tab.
For more information about searches, see the "Searching Metadata" chapter in the Informatica 10.0 Metadata
Manager User Guide.
The following Metadata Manager log files are stored in the directory <Informatica installation
directory>\logs\<node name>\services\MetadataManagerService\<Metadata Manager service name>:
For more information about Metadata Manager log files, see the Informatica 10.0 Metadata Manager
Administrator Guide.
To export and import business glossary assets and templates or to customize business glossaries, use the
Analyst tool.
Profiling
Effective in version 10.0, Metadata Manager does not extract profiling information from relational metadata
sources.
PowerCenter
This section describes changes to PowerCenter in version 10.0.
For more information, see the Informatica 10.0 Application Services Guide.
PurgeVersion command
• Effective in version 10.0, you can use pmrep purgeVersion -c with or without the -p option.
When you use the -c option with the -p option, the output lists the object versions that purge, then lists
which object versions are contained in deployment groups.
When you use the -c option without the -p option, the command does not purge versions that are part of
deployment groups.
Previously, when you used the -c option, the -p option was required.
• Effective in version 10.0, if an object version is a member of a deployment group, the version will not
purge.
When you use pmrep purgeVersion with the -k option, the results display all versions that do not purge,
and the reason the version does not purge.
When a version will not be purged because it is in a deployment group, the reason lists only the first
deployment group that causes the object not to purge.
Previously, the inclusion of a version in a deployment group did not affect whether or not it would be
purged.
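For example, the following command purges all but the latest version of each object, skips versions that
belong to deployment groups because -c is used without -p, and uses -k to log the reason that each skipped
version is not purged. This is a sketch that assumes a connected pmrep session and that -n specifies the
number of latest versions to keep; verify the option letters in the Informatica 10.0 Command Reference:
pmrep purgeVersion -n 1 -c -k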
For more information, see the Informatica 10.0 Command Reference.
For more information, see the Informatica 10.0 Data Discovery Guide.
PowerExchange Adapters
This section describes changes to PowerExchange adapters in version 10.0.
SAP Connections
The SAP connections that you created in versions earlier than 10.0 are deprecated. The deprecated
connection category is named SAP (Deprecated) under Enterprise Application.
Informatica will drop support for the deprecated connections in a future release. You can run mappings
with the deprecated connections and also create a new deprecated connection. However, Informatica
recommends that you create a new SAP connection by using the SAP category under Enterprise
Application.
The SAP data objects that you created in versions earlier than 10.0 are deprecated. The deprecated data
object type is named SAP Data Object (Deprecated).
Informatica will drop support for the deprecated data objects in a future release. You can run mappings
with the existing data objects and also create a new deprecated data object. However, Informatica
recommends that you create a new data object of type SAP Table Data Object to read data from SAP
tables.
For more information, see the Informatica 10.0 PowerExchange for SAP NetWeaver User Guide.
Reference Data
This section describes changes to reference data operations in version 10.0.
Classifier Models
Effective in version 10.0, you view and manage the data in a classifier model in a single view in the Developer
tool.
Previously, you toggled between two views in the Developer tool to see all of the options on a classifier
model.
For more information, see the "Classifier Models" chapter of the Informatica 10.0 Reference Data Guide.
For more information, see the "Reference Data in the Developer Tool" chapter of the Informatica 10.0
Reference Data Guide.
Rule Specifications
This section describes changes in rule specifications in version 10.0.
• Effective in version 10.0, you create inputs and update the input properties in the Manage Global Inputs
dialog box.
Previously, you created and updated an input in the rule set that read the input.
• Effective in version 10.0, a rule set uses text indicators to describe the sequence in which data moves
through the rule statements.
Previously, a rule set used numbers to indicate the sequence.
• Effective in version 10.0, the Design workspace in the Analyst tool uses the term "generate" to identify
the operation that creates a mapplet rule from a rule specification.
Previously, the Design workspace used the term "compile" to identify the operation.
• Effective in version 10.0, you can validate and generate a rule specification that contains unused inputs.
Previously, a rule specification that contained unused inputs was not valid.
• Effective in version 10.0, you can create and begin work on a rule specification in a single operation.
Previously, you created and opened a rule specification in separate operations.
For more information, see the Informatica 10.0 Rule Specification Guide.
Security
This section describes changes to security in Informatica version 10.0.
Authentication
This section describes changes to authentication for the Informatica domain.
Effective in Informatica 10.0, single sign-on for an Informatica domain without Kerberos authentication has
the following changes:
Single sign-on with the Developer tool
When you open a web application client from the Developer tool, you must log in to the web application.
Previously, you did not have to enter login information for the web application.
You must log out from each web application client separately if you use the Administrator tool to open a
web application client. For example, if you use the Administrator tool to open the Analyst tool, you must
log out of the Administrator tool and the Analyst tool separately.
For more information, see the Informatica PowerCenter 10.0 Designer Guide.
Transformations
This section describes changed transformation behavior in version 10.0.
Informatica Transformations
This section describes the changes to the Informatica transformations in version 10.0.
Previously, you entered the country name or the three-character ISO country code as the parameter value.
Aggregator Transformation
Effective in version 10.0, you define the group by ports on the Group By tab of the Aggregator transformation
Properties view.
You can parameterize the ports you want to include in the aggregator group with a port list parameter. You
can include dynamic ports in the Aggregator transformation.
Previously, you selected group by ports on the Ports tab of the transformation Properties view.
For more information about the Aggregator transformation, see the Aggregator Transformation chapter in the
Informatica 10.0 Developer Transformation Guide.
Match Transformation
Effective in Informatica 10.0, the Match transformation displays the following changes in behavior:
• Effective in version 10.0, the Match transformation generates unique cluster ID values across all threads
in the same process.
Previously, the Match transformation generated the cluster ID values independently on each thread.
• Effective in version 10.0, you select the following option to connect the Match transformation to a
persistent store of identity index data:
Identity Match with Persistent Record ID
Rank Transformation
Effective in version 10.0, you define the rank port and the group by ports on the Rank tab of the
transformation Properties view.
You can parameterize the rank port with a port parameter. You can parameterize the group by ports with a
port list parameter. You can include dynamic ports in the Rank transformation.
Previously, you selected the rank port and the group by ports on the Ports tab of the transformation
Properties view.
For more information about the Rank transformation, see the Informatica 10.0 Developer Transformation
Guide.
Sorter Transformation
This section describes changes to the Sorter transformation in version 10.0.
Cache Size
Effective in version 10.0, the Sorter transformation pages fewer cache files to disk, which improves
performance. If the configured cache size is too small for the Sorter transformation, the Data Integration
Service processes some of the data in memory and stores only the overflow data in cache files.
Previously, if the cache size was too small, the Data Integration Service paged all the cache files to disk.
You can parameterize the ports you want to include in the sort key with a sort list parameter. You can include
dynamic ports in the Sorter transformation.
Previously, you selected ports for sort keys on the Ports tab of the transformation Properties view. You
selected to create distinct rows on the Advanced tab.
For more information, see the Informatica 10.0 Developer Transformation Guide.
Workflows
This section describes changed workflow behavior in version 10.0.
Informatica Workflows
This section describes the changes to Informatica workflow behavior in version 10.0.
Command Tasks
Effective in version 10.0, a Command task does not fail when the working directory that the task specifies is
not valid.
Previously, a Command task failed when the working directory was not valid.
For more information, see the Informatica 10.0 Developer Workflow Guide.
Effective in version 10.0, the Workflow Orchestration Service Module on the Data Integration Service runs all
stages in a workflow.
Previously, you could configure different Data Integration Services to run Human tasks and to run the other
stages in a workflow.
Note: Complete all Human tasks that you run in an earlier version of Informatica before you upgrade to
version 10.0.
For more information, see the Informatica 10.0 Application Service Guide.
Human Tasks
Effective in version 10.0, a Human task does not stop a workflow when the exceptionLoadCount input value
on the task is less than 1. When the exceptionLoadCount input value is less than 1, the Human task
completes but generates no data for Analyst tool users.
Previously, a Human task stopped a workflow when the exceptionLoadCount input value was less than 1.
Effective in version 10.0, a Human task sends email notifications using the email server configuration in the
Email Service properties.
Previously, a Human task sent email notifications using the email server configuration in the Data Integration
Service properties.
Effective in version 10.0, you cannot move from one step to another in a Human task if you cancel the
workflow in the following scenario:
For more information, see the Informatica 10.0 Developer Workflow Guide.
Mapping Tasks
Effective in version 10.0, the Data Integration Service creates a log file for each instance of a Mapping task
that runs in a workflow instance. If the Mapping task restarts following an interruption in an earlier workflow
run, the Data Integration Service creates a log file for the restarted task.
Previously, the Data Integration Service stored log data for all instances of a Mapping task that ran in a
workflow instance in a single file.
Notification Tasks
Effective in version 10.0, a Notification task sends email notifications using the email server configuration in
the Email Service properties.
Previously, a Notification task sent email notifications using the email server configuration in the Data
Integration Service properties.
For more information, see the Informatica 10.0 Developer Workflow Guide.
Run-Time Metadata
Effective in version 10.0, the Data Integration Service stores all run-time metadata for a workflow in a set of
tables in a single database. You select the database connection as a Workflow Orchestration Service
property on the Data Integration Service.
Previously, the Data Integration Service stored run-time metadata for a workflow in the Model repository and
stored any Human task metadata in the Human task database. The Human task database is obsolete in
version 10.0.
Note: You must create the workflow database contents before you run a workflow. To create the contents,
use the Actions menu options for the Data Integration Service in the Administrator tool.
For more information, see the Informatica 10.0 Application Service Guide.
Workflow Monitoring
Effective in version 10.0, a workflow can enter a completed state if a Command task or a Mapping task in the
workflow sequence fails to complete.
For example, a workflow can continue to run to completion if a Mapping task fails in one of the following
scenarios:
• You enabled the workflow for recovery, and you configured the Mapping task with a skip recovery
strategy.
• You did not enable the workflow for recovery.
Previously, a workflow entered a failed state if a Command task or a Mapping task failed during the workflow
run.
For more information, see the Informatica 10.0 Administrator Guide and the Informatica 10.0 Developer
Workflow Guide.
Effective in version 10.0, the XML 1.0 specification determines the range of valid characters and symbols
that you can use in the following names:
• Workflow names
• Task names
• Gateway names
• Workflow application names
• Workflow variable names
• Workflow parameter names
The XML 1.0 specification excludes a small number of characters and symbols from the names. If any name
contains a character or symbol that the specification excludes, the workflow fails to run.
Previously, the XML 1.0 specification did not determine the range of valid characters and symbols in
workflow names and associated object names.
If you upgrade to version 10.0 or later, edit any workflow or associated object name that contains a character
or a symbol that the XML 1.0 specification does not support.
For more information, see the Informatica 10.1 Upgrading from Version 9.5.1 Guide and the Informatica 10.1
Upgrading from Version 9.6.1 Guide.
Workflow Recovery
Effective in version 10.0, the Data Integration Service does not impose a limit on the number of attempts to
recover a workflow. The Administrator tool does not display the number of times that you try to recover the
workflow.
Previously, you configured a maximum number of recovery attempts in the Developer tool. The monitoring
features of the Administrator tool displayed the number of times that you tried to recover the workflow.
Previously, when you canceled a workflow, the workflow entered a Canceled state when the currently running
task ended.
For more information, see the Informatica 10.0 Administrator Guide and the Informatica 10.0 Developer
Workflow Guide.
Chapter 19
Release Tasks (10.0)
This chapter includes the following topic:
• Mappings
Mappings
This section describes release tasks for Mappings in version 10.0.
Parameter Precision
Effective in version 10.0, the size of a default parameter value must be less than or equal to the precision
specified for the parameter. In previous versions, if the parameter default value was greater than the
precision size, the Data Integration Service truncated the parameter default value and the mapping ran
successfully.
After the upgrade to 10.0 is complete, you must verify that the size of each parameter default value is less
than or equal to the precision specified for the parameter. If the parameter default value is greater than the
precision, update the default value or change the precision. Redeploy the mapping.
In version 10.0, if the size of the parameter default value is greater than the parameter precision, a mapping
fails with the following error:
The parameter [my_parameter] should have a default value length less than or equal to the
precision.
Part V: Version 9.6.1
This part contains the following chapters:
• New Features, Changes, and Release Tasks (9.6.1 HotFix 4)
• New Features, Changes, and Release Tasks (9.6.1 HotFix 3)
• New Features, Changes, and Release Tasks (9.6.1 HotFix 2)
• New Features, Changes, and Release Tasks (9.6.1 HotFix 1)
• New Features (9.6.1)
• Changes (9.6.1)
Chapter 20
New Features, Changes, and Release Tasks (9.6.1 HotFix 4)
Command Line Programs
This section describes new commands in version 9.6.1 HotFix 4.
Command Description
ListDomainCiphers Displays one or more of the following cipher suite lists used by the Informatica domain or a
gateway node:
Black list
User-specified list of cipher suites that the Informatica domain blocks.
Default list
List of cipher suites that Informatica supports by default.
Effective list
The list of cipher suites that the Informatica domain uses after you configure it with the
infasetup updateDomainCiphers command. The effective list supports cipher suites in the
default list and white list but blocks cipher suites in the black list.
White list
User-specified list of cipher suites that the Informatica domain can use in addition to the
default list.
You can specify which lists you want to display.
For more information, see the "infacmd isp Command Reference" chapter in the Informatica 9.6.1 HotFix 4
Command Reference.
Command Description
ListDomainCiphers Displays one or more of the following cipher suite lists used by the Informatica domain or a
gateway node:
Black list
User-specified list of cipher suites that the Informatica domain blocks.
Default list
List of cipher suites that Informatica supports by default.
Effective list
The list of cipher suites that the Informatica domain uses after you configure it with the
infasetup updateDomainCiphers command. The effective list supports cipher suites in the
default list and white list but blocks cipher suites in the black list.
White list
User-specified list of cipher suites that the Informatica domain can use.
You can specify which lists you want to display.
updateDomainCiphers Updates the cipher suites that the Informatica domain can use with a new effective list.
For more information, see the "infasetup Command Reference" chapter in the Informatica 9.6.1 HotFix 4
Command Reference.
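For example, a command along the following lines adds one cipher suite to the white list and another to the
black list, after which ListDomainCiphers can display the resulting effective list. This is a sketch only: the
-cwl and -cbl option names are assumptions based on the corresponding infasetup DefineDomain options,
and the cipher suite names are illustrative:
infasetup updateDomainCiphers -cwl TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384 -cbl TLS_RSA_WITH_AES_128_CBC_SHA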
Connectivity
This section describes new connectivity features in version 9.6.1 HotFix 4.
For example, enter the following syntax in the metadata connection string URL:
jdbc:informatica:sqlserver://<host name>:<port>;DatabaseName=<database
name>;ischemaname=<schema_name1>|<schema_name2>|<schema_name3>
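For instance, with hypothetical host, port, database, and schema names, the metadata connection string
URL might read:
jdbc:informatica:sqlserver://sqlhost01:1433;DatabaseName=SalesDB;ischemaname=dbo|staging|audit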
For more information, see the Informatica 9.6.1 HotFix 4 Developer Tool Guide and Informatica 9.6.1 HotFix 4
Analyst Tool Guide.
Exception Management
This section describes new exception management features in version 9.6.1 HotFix 4.
Effective in version 9.6.1 HotFix 4, you can configure the options in an exception task to search and
replace data values based on the data type. You can configure the options to search and replace data in
any column that contains date, string, or numeric data.
When you specify a data type, the Analyst tool searches for the value that you enter in any column that
uses the data type. You can find and replace any value that a string data column contains. You can
perform case-sensitive searches on string data. You can search for a partial match or a complete match
between the search value and the contents of a field in a string data column.
For more information, see the Exception Records chapter in the Informatica 9.6.1 HotFix 4 Exception
Management Guide.
Informatica Domain
This section describes new Informatica Domain features in version 9.6.1 HotFix 4.
Domain Reports
Effective in version 9.6.1 HotFix 4, the License Management Report includes the consumed cores property.
This property indicates the number of cores on the machine.
For more information about the License Management Report, see the "Domain Reports" chapter in the
Informatica 9.6.1 HotFix 4 Administrator Guide.
Informatica Transformations
This section describes new Informatica transformation features in version 9.6.1 HotFix 4.
The Address Validator transformation contains additional address functionality for the following countries:
Ireland
Effective in version 9.6.1 HotFix 4, you can return the eircode for an address in Ireland. An eircode is a
seven-character code that uniquely identifies an Ireland address. The eircode system covers all
residences, public buildings, and business premises and includes apartment addresses and addresses in
rural townlands.
To return the eircode for an address, select a Postcode port or a Postcode Complete port.
France
Effective in version 9.6.1 HotFix 4, address validation uses the Hexaligne 3 repository of the National
Address Management Service to certify a France address to the SNA standard.
Germany
Effective in version 9.6.1 HotFix 4, you can retrieve the three-digit street code part of the Frachtleitcode
or Freight Code as an enrichment to valid Germany addresses. The street code identifies the street
within the address.
To retrieve the street code as an enrichment to verified Germany addresses, select the Street Code DE
port. Find the port in the DE Supplementary port group.
South Korea
Effective in version 9.6.1 HotFix 4, you can verify older, lot-based addresses and addresses with older,
six-digit post codes in South Korea. You can verify and update addresses that use the current format, the
older format, and a combination of the current and older formats. A current South Korea address has a
street-based format and includes a five-digit post code. A non-current address has a lot-based format
and includes a six-digit post code.
To verify a South Korea address in an older format and to change the information to another format, use
the Address Identifier KR ports. You update the address information in two stages. First, run the address
validation mapping in batch or interactive mode and select the Address Identifier KR output port. Then,
run the address validation mapping in address code lookup mode and select the Address Identifier KR
input port. Find the Address Identifier KR input port in the Discrete port group. Find the Address Identifier
KR output port in the KR Supplementary port group.
To verify that the Address Validator transformation can read and write the address data, add the
Supplementary KR Status port to the transformation.
Informatica adds the Address Identifier KR ports, the Supplementary KR Status port, and the KR
Supplementary port group in version 9.6.1 HotFix 4.
Effective in version 9.6.1 HotFix 4, you can retrieve South Korea address data in the Hangul script and in
a Latin script.
United Kingdom
Effective in version 9.6.1 HotFix 4, you can retrieve delivery point type data and organization key data for
a United Kingdom address. The delivery point type is a single-character code that indicates whether the
address points to a residence, a small organization, or a large organization. The organization key is an
eight-digit code that the Royal Mail assigns to small organizations.
To add the delivery point type to a United Kingdom address, use the Delivery Point Type GB port. To add
the organization key to a United Kingdom address, use the Organization Key GB port. Find the ports in
the UK Supplementary port group. To verify that the Address Validator transformation can read and write
the data, add the Supplementary UK Status port to the transformation.
Informatica adds the Delivery Point Type GB port and the Organization Key GB port in version 9.6.1
HotFix 4.
For more information, see the Informatica 9.6.1 HotFix 4 Address Validator Port Reference.
Metadata Manager
This section describes new Metadata Manager features in version 9.6.1 HotFix 4.
Application Properties
Effective in version 9.6.1 HotFix 4, you can configure new application properties in the Metadata Manager
[Link] file.
The following table describes new Metadata Manager application properties in [Link]:
Property Description
[Link] Maximum number of errors that the Metadata Manager Service can
encounter before the custom resource load fails.
[Link] Number of errors that the Metadata Manager Service writes to the in-memory
cache and to the [Link] file in one batch when you load a custom
resource.
For more information about the [Link] file, see the "Metadata Manager Properties Files" appendix in
the Informatica 9.6.1 HotFix 4 Metadata Manager Administrator Guide.
For more information, see the Informatica 9.6.1 HotFix 4 Upgrading from Version 9.5.1 Guide.
PowerExchange Adapters
This section describes new PowerExchange adapter features in version 9.6.1 HotFix 4.
For more information, see the "Greenplum Sessions and Workflows" chapter in the Informatica 9.6.1 HotFix 4
PowerExchange for Greenplum User Guide for PowerCenter.
For more information, see the "Teradata PT API Sessions and Workflows" chapter in the Informatica 9.6.1
HotFix 4 PowerExchange for Teradata Parallel Transporter API User Guide for PowerCenter.
The Informatica domain uses an effective list of cipher suites that includes the cipher suites in the default
list and white list but blocks the cipher suites in the black list.
For more information, see the "Domain Security" chapter in the Informatica 9.6.1 HotFix 4 Security Guide.
Application Services
This section describes changes to Application Services in version 9.6.1 HotFix 4.
If you upgrade to version 9.6.1 HotFix 4, you can continue to use the Reporting and Dashboards Service.
Informatica recommends that you begin using a third-party reporting tool before Informatica drops support.
You can use the recommended SQL queries for building all the reports shipped with earlier versions of
PowerCenter.
If you install version 9.6.1 HotFix 4, you cannot create a Reporting and Dashboards Service. You must use a
third-party reporting tool to run PowerCenter and Metadata Manager reports.
For information about the PowerCenter Reports, see the Informatica PowerCenter Using PowerCenter Reports
Guide. For information about the PowerCenter repository views, see the Informatica PowerCenter Repository
Guide.
Informatica Domain
This section describes changes to Informatica Domain in version 9.6.1 HotFix 4.
Domain Reports
Effective in version 9.6.1 HotFix 4, the property cores in the License Management Report is renamed to cores
per socket. This property describes the number of cores for each socket on the machine.
For more information about the License Management Report, see the "Domain Reports" chapter in the
Informatica 9.6.1 HotFix 4 Administrator Guide.
Informatica Installation
This section describes the changes to the Informatica Installer in version 9.6.1 HotFix 4.
Before you install or upgrade Informatica on AIX, HP-UX, or zLinux, you must first install the Java runtime
environment (JRE) and set the INFA_JRE_HOME environment variable. When you upgrade, remove the
INFA_JDK_HOME environment variable.
For more information, see the "Install the Java Runtime Environment" chapter in the Informatica 9.6.1 HotFix
4 Installation and Configuration Guide and the Informatica upgrade guides.
Informatica Transformations
This section describes changes to Informatica transformations in version 9.6.1 HotFix 4.
The Address Validator transformation contains the following updates to address functionality:
Effective in version 9.6.1 HotFix 4, the Address Validator transformation uses version 5.8.1 of the
Informatica Address Verification software engine. The engine enables the features that Informatica adds
to the Address Validator transformation in version 9.6.1 HotFix 4.
Previously, the transformation used version 5.7.0 of the Informatica AddressDoctor software engine.
Effective in version 9.6.1 HotFix 4, you can select Rooftop as a geocode data property to retrieve
rooftop-level geocodes for United Kingdom addresses.
Previously, you selected the Arrival Point geocode data property to retrieve rooftop-level geocodes for
United Kingdom addresses.
If you upgrade a repository that includes an Address Validator transformation, you do not need to
reconfigure the transformation to specify the Rooftop geocode property. If you specify rooftop geocodes
and the Address Validator transformation cannot return the geocodes for an address, the transformation
does not return any geocode data.
Support for unique property reference numbers in United Kingdom input data
Effective in version 9.6.1 HotFix 4, the Address Validator transformation has a UPRN GB input port and a
UPRN GB output port.
For more information, see the Informatica 9.6.1 HotFix 4 Address Validator Port Reference.
Metadata Manager
This section describes changes to Metadata Manager in version 9.6.1 HotFix 4.
Because the command line programs no longer accept security certificates that have errors, the
[Link] property is obsolete. The property no longer appears in the
[Link] files for mmcmd or mmRepoCmd.
For more information about certificate validation for mmcmd and mmRepoCmd, see the "Metadata Manager
Command Line Programs" chapter in the Informatica 9.6.1 HotFix 4 Metadata Manager Administrator Guide.
Changes to Security
This section describes changes to security in version 9.6.1 HotFix 4.
The changes affect secure communication within the Informatica domain, secure connections to web
application services, and connections between the Informatica domain and an external destination.
Metadata Manager
This section describes release tasks for Metadata Manager in version 9.6.1 HotFix 4.
Effective in version 9.6.1 HotFix 4, the mmcmd and mmRepoCmd command line programs do not accept
security certificates that have errors. The property that controls whether a command line program can
accept security certificates that have errors is removed. Previously, the property had the following settings:
• NO_AUTH. The command line program accepts the digital certificate, even if the certificate has errors.
• FULL_AUTH. The command line program does not accept a security certificate that has errors.
The NO_AUTH setting is no longer valid. The command line programs now only accept security certificates
that do not contain errors.
If a secure connection is configured for the Metadata Manager web application, and you previously set the
[Link] property to NO_AUTH, you must now configure a truststore file. To configure
mmcmd or mmRepoCmd to use a truststore file, edit the [Link] file that is associated
with mmcmd or mmRepoCmd. Set the [Link] property to the path and file name of the truststore
file.
For more information about the [Link] files for mmcmd and mmRepoCmd, see the
"Metadata Manager Command Line Programs" chapter in the Informatica 9.6.1 HotFix 4 Metadata Manager
Administrator Guide.
Business Glossary
This section describes new Business Glossary features in version 9.6.1 HotFix 3.
For more information, see the Informatica 9.6.1 HotFix 3 Business Glossary Guide.
Create Hyperlinks from URLs
Effective in version 9.6.1 HotFix 3, you can create hyperlinks when you insert URLs in the Description, Usage
Context, Example, and Reference Table URL properties for business terms. You can link to assets from any
glossary.
For more information, see the Informatica 9.6.1 HotFix 3 Business Glossary Guide.
Effective in version 9.6.1 HotFix 3, you can query an SQL data service that contains datetime data from
Microsoft Access. When you configure the Informatica Data Services ODBC Driver, enter the following
parameter in the Optional Parameters field in the Configure Data Source to Informatica Data Services
dialog box:
APPLICATION=ACCESS
When you configure the ODBC driver with this parameter, the Data Integration Service uses the date/time
data type for Microsoft Access date data.
Informatica Transformations
This section describes new Informatica transformation features in version 9.6.1 HotFix 3.
Effective in version 9.6.1 HotFix 3, you can configure the Address Validator transformation to return a
code that uniquely identifies the neighborhood that contains a Belgium address. To return the code,
select the NIS Code output port. Find the port in the BE Supplementary port group.
The NIS Code port returns the five-digit NIS code that identifies the locality and a four-character code
that identifies the neighborhood within the locality. The national statistics directorate in Belgium defines
the codes.
To return the data on the NIS Code port, the Address Validator transformation reads supplementary
address reference data for Belgium. To verify that the Address Validator transformation can read the
supplementary data, add the Supplementary BE Status output port to the transformation. Informatica
adds the NIS Code port, the Supplementary BE Status port, and the BE Supplementary port group in
version 9.6.1 HotFix 3.
Support for Federal Information Addressing System identifiers in Russian Federation addresses
Effective in version 9.6.1 HotFix 3, you can configure the Address Validator transformation to return the
Federal Information Addressing System identifier for an address in the Russian Federation. To return the
identifier, select the FIAS ID output port. Find the port in the RU Supplementary port group.
The FIAS ID port returns up to 36 characters. The Federal State Statistics Service of the Russian
Federation maintains the identifier data.
To return the data on the FIAS ID port, the Address Validator transformation reads supplementary
address reference data for the Russian Federation. To verify that the Address Validator transformation
can read the supplementary data, add the Supplementary RU Status output port to the transformation.
Effective in version 9.6.1 HotFix 3, you can configure the Address Validator transformation to return the
unique property reference number for an address in Great Britain. The number uniquely identifies the plot
of land that contains an address in the United Kingdom. To return the unique property reference number,
select the UPRN output port. Find the port in the UK Supplementary port group.
The unique property reference number contains 12 digits. The Ordnance Survey of Great Britain
maintains the unique property reference numbers.
To return the data on the UPRN port, the Address Validator transformation reads supplementary address
reference data for Great Britain. To verify that the Address Validator transformation can read the
supplementary data, add the Supplementary UK Status output port to the transformation. Informatica
adds the UPRN port in version 9.6.1 HotFix 3.
Ability to remove locality and province descriptors from China and Japan addresses
Effective in version 9.6.1 HotFix 3, you can configure the Address Validator transformation to remove
locality descriptors and province descriptors from addresses in China and Japan. For example, the
Address Validator transformation can return Chaoyang instead of Chaoyangqu and Beijing instead of
Beijingshi in Chinese addresses.
To remove the descriptors, configure the Preferred Language property and the Preferred Script property
on the transformation.
Effective in version 9.6.1 HotFix 3, you can validate Bulgaria addresses in the Cyrillic script. By default,
the Address Validator transformation returns the results in the Cyrillic script.
To receive the results in the Latin script, configure the Preferred Script property on the transformation.
Effective in version 9.6.1 HotFix 3, you can validate Slovakia addresses that contain major street name
abbreviations.
The transformation replaces the abbreviations with the names that the postal authority specifies in the
valid address output.
Ability to retrieve province ISO codes in batch, interactive, and fast completion modes
Effective in version 9.6.1 HotFix 3, the Address Validator transformation extends support for ISO 3166-2
province codes to the following countries:
• Canada
• France
• United States
For example, the transformation returns the province code NC, which identifies North Carolina, for the
following address:
15501 WESTON PKWY STE 150
CARY 27513
USA
For more information, see the Informatica 9.6.1 HotFix 3 Address Validator Port Reference and the Informatica
9.6.1 HotFix 3 Developer Transformation Guide.
Metadata Manager
This section describes new Metadata Manager features in version 9.6.1 HotFix 3.
Metadata Source Versions
Effective in version 9.6.1 HotFix 3, some metadata sources have new supported versions.
• Cloudera Navigator
• ERwin
• Informix
For more information about supported metadata source versions, see the PCAE Metadata Manager XConnect
Support Product Availability Matrix on Informatica Network:
[Link]
You can configure the following properties when you create or edit a Cloudera Navigator resource:
Enable incremental load
Enables incremental loading for Cloudera Navigator resources after the first successful resource load.
When you enable this option, Metadata Manager loads recent changes to the metadata instead of
loading complete metadata.
During an incremental load, Metadata Manager extracts only the following entities:
• HDFS entities that were created or changed after the previous resource load
• All Hive tables, views, and partitions
• Operation executions that were created after the previous resource load
• All templates related to the new operation executions
Search query
Query that limits the HDFS entities that Metadata Manager extracts. By default, Metadata Manager does
not extract HDFS entities from certain directories that contain only canary files, log files, history files, or
deleted files. You can update the default search query to prevent Metadata Manager from extracting
other HDFS entities. The query that you enter must use valid Cloudera Navigator search syntax.
For more information about Cloudera Navigator resources, see the Informatica 9.6.1 HotFix 3 Metadata
Manager Administrator Guide.
For more information about extracting extended properties for Microsoft SQL Server resources, see the
Informatica 9.6.1 HotFix 3 Metadata Manager Administrator Guide.
Business Glossary
This section describes changes to Business Glossary in version 9.6.1 HotFix 3.
Previously, the export file did not have hidden worksheets and a home page.
Previously, a user who was assigned the Manage Glossaries privilege in the Analyst tool could modify the
permissions and privileges of a user for any glossary.
Glossary Import
Effective in version 9.6.1 HotFix 3, when you import a glossary that is not present in Business Glossary, the
Analyst tool creates the glossary during import. When you import a glossary, the Analyst tool automatically
populates the custom properties which are present in the glossary with values from the export file. The
Analyst tool also attaches the custom properties to the relevant templates, even if the custom properties
were not attached to any template before the import process.
Previously, if you wanted to import a glossary that was not present in Business Glossary, you first needed to
create the glossary in the Analyst tool before importing the glossary contents from the export file. The
Analyst tool did not populate the custom properties with information from the export file when they were not
attached to any template.
Synonyms
Effective in version 9.6.1 HotFix 3, synonyms in business terms have the following changed behavior:
• You can remove or modify the Retirement Date that you have set for the Synonym property.
• You do not have to use the date picker to set the Create Date and Retirement Date. You can manually set
the date, but it must be in the format determined by the locale of the installation.
• You can see the Create Date of a synonym when you open a business term.
Previously, you could not remove or modify the retirement date. You could only use the date picker to set the
date. You could not view the date of creation in the business term.
Informatica Transformations
This section describes the changes to the Informatica transformations in version 9.6.1 HotFix 3.
• Effective in version 9.6.1 HotFix 3, the Address Validator transformation uses version 5.7.0 of the
Informatica Address Doctor software engine. The engine enables the features that Informatica adds to the
Address Validator transformation in version 9.6.1 HotFix 3.
Previously, the transformation used version 5.6.0 of the Informatica Address Doctor software engine.
• Effective in version 9.6.1 HotFix 3, you can configure the Address Validator transformation to return the
locality information in Switzerland addresses in French, German, or Italian. To set the language, use the
Preferred Language property.
Previously, the Address Validator transformation returned all information in a Switzerland address in the
main language of the region to which the address belonged.
• Effective in version 9.6.1 HotFix 3, the Address Validator transformation returns rooftop-level geocodes
for addresses in the United Kingdom that do not include house numbers or building numbers.
Previously, the transformation returned rooftop-level geocodes only for United Kingdom addresses that
included house numbers or building numbers.
Metadata Manager
This section describes changes to Metadata Manager in version 9.6.1 HotFix 3.
Effective in version 9.6.1 HotFix 3, to load Business Glossary resources, you need the Load Resource,
Manage Resource, and View Model privileges.
Previously, to load Business Glossary resources, you needed the Load Resource and Manage Models
privileges for the Metadata Manager Service.
Effective in version 9.6.1 HotFix 3, do not run the mmcmd migrateBGLinks command after you upgrade a
business glossary from version 9.5.x. The migrateBGLinks command restores related catalog objects for
upgraded business glossaries. The command now runs automatically the first time that you load a
Business Glossary resource after upgrade.
Previously, you had to run the migrateBGLinks command as the last step in the upgrade process for
business glossaries.
Effective in version 9.6.1 HotFix 3, you cannot create related catalog objects for categories. You can still
create related catalog objects for business terms.
Previously, you could relate categories to other categories or to business glossaries in Metadata
Manager, but you could not relate categories to other metadata objects. If you did create category to
category or category to glossary relationships in Metadata Manager, Metadata Manager did not update
these relationships in the Analyst tool business glossary.
To create term to term, term to category, category to term, or category to category relationships, use the
Analyst tool.
Effective in version 9.6.1 HotFix 3, Metadata Manager can load Business Glossary resources that contain
custom properties with special characters in the name. However, Metadata Manager does not extract
custom properties that contain special characters in the name.
Specifically, Metadata Manager does not extract custom properties with names that contain any of the
following special characters:
Previously, if you tried to load a Business Glossary resource that contained custom properties with any
of these characters in the name, the load failed.
Permissions
Effective in version 9.6.1 HotFix 3, permissions control which resources users can access on the Load
tab as well as the Browse tab. To perform an action on a resource, a user needs both the appropriate
privilege and the appropriate permission on the resource.
For example, to view a resource on the Load tab, a user needs the View Resource privilege and read
permission on the resource. To load a resource, a user needs the Load Resource privilege and write
permission on the resource. To edit a resource, a user needs the Manage Resource privilege and write
permission on the resource.
Because of this change, the resources that a user sees on the Load tab match the resources that the user
sees on the Browse tab. The user no longer sees all resources on the Load tab unless the user has at least
read privilege on all resources.
Previously, permissions determined which resources and metadata objects users could access on the
Browse tab, but they did not affect the Load tab. Permissions for the Browse tab are not changed.
Previously, when you restarted the domain, you had to recycle the Metadata Manager Service to enable the
View Reports button.
Security
This section describes changes to security in version 9.6.1 HotFix 3.
Effective in version 9.6.1 HotFix 3, Informatica dropped support for SSL keys that use fewer than 512 bits if
they use RSA encryption. This change affects secure communication within the Informatica domain and
secure connections to web application services.
If your SSL keys are affected by this change, you must generate new RSA encryption based SSL keys with
more than 512 bits or use an alternative encryption algorithm. Then, use the new keys to create the files
required for secure communication within the domain or for secure connections to web application services.
For more information about the files required for secure communication within the Informatica domain or
secure connections, see the Informatica Security Guide.
Previously, Informatica supported RSA encryption based SSL keys that use fewer than 512 bits.
Metadata Manager
This section describes release tasks for Metadata Manager in version 9.6.1 HotFix 3.
After you upgrade to or apply 9.6.1 HotFix 3, you must verify permissions for each user that has privileges in
the Load privilege group. If a user does not have the appropriate permissions on a resource, the user cannot
view, load, or manage the resource.
The following table lists the privileges and permissions required to manage an instance of a resource in the
Metadata Manager warehouse:
Load Resource and View Resource privileges with Write permission. The user can perform the following
actions:
- Load metadata for a resource into the Metadata Manager warehouse.*
- Create links between objects in connected resources for data lineage.
- Configure search indexing for resources.
- Import resource configurations.
Manage Schedules and View Resource privileges with Write permission. The user can perform the following
actions:
- Create and edit schedules.
- Add schedules to resources.
Purge Metadata and View Resource privileges with Write permission. The user can remove metadata for a
resource from the Metadata Manager warehouse.
Manage Resource, Purge Metadata, and View Resource privileges with Write permission. The user can
create, edit, and delete resources.
* To load metadata for Business Glossary resources, the Load Resource, Manage Resource, and View Model privileges
are required.
Configure permissions on the Security tab of the Metadata Manager application. For more information about
configuring permissions, see the Informatica 9.6.1 HotFix 3 Metadata Manager Administrator Guide.
Chapter 22
Big Data
This section describes new big data features in version 9.6.1 HotFix 2.
Informatica Analyst
Big Data Edition has the following new features and enhancements for the Analyst tool:
Effective in version 9.6.1 HotFix 2, you can enable the Analyst tool to communicate with a Hadoop
cluster on a specific Hadoop distribution. You must configure the JVM Command Line Options for the
Analyst Service.
For more information, see the Informatica 9.6.1 HotFix 2 Application Services Guide.
Effective in version 9.6.1 HotFix 2, you can use the Analyst tool to connect to Hive or HDFS sources and
targets.
For more information, see the Informatica 9.6.1 HotFix 2 Analyst User Guide.
Data Warehousing
Big Data Edition has the following new features and enhancements for data warehousing:
Binary Data Type
Effective in version 9.6.1 HotFix 2, a mapping in the Hive environment can process expression functions
that use binary data.
For more information, see the Informatica 9.6.1 HotFix 2 Big Data Edition User Guide.
Effective in version 9.6.1 HotFix 2, PowerExchange for Hive supports the Timestamp and Date data
types.
For more information, see the Informatica 9.6.1 HotFix 2 Big Data Edition User Guide.
File Format
Effective in version 9.6.1 HotFix 2, you can use the Data Processor transformation to read Parquet input
or output.
Apache Parquet is a columnar storage format that can be processed in a Hadoop environment. Parquet
is implemented to address complex nested data structures, and uses a record shredding and assembly
algorithm.
For more information, see the Informatica 9.6.1 HotFix 2 Data Transformation User Guide.
Data Lineage
Effective in version 9.6.1 HotFix 2, you can perform data lineage analysis on big data sources and targets.
You can create a Cloudera Navigator resource to extract metadata for big data sources and targets and
perform data lineage analysis on the metadata.
For more information, see the Informatica 9.6.1 HotFix 2 Metadata Manager Administrator Guide.
Hadoop Ecosystem
Big Data Edition has the following new features and enhancements for the Hadoop ecosystem:
Hadoop Distributions
Effective in version 9.6.1 HotFix 2, Big Data Edition added support for the following Hadoop distributions:
Big Data Edition dropped support for the following Hadoop distributions:
For more information, see the Informatica 9.6.1 HotFix 2 Big Data Edition Installation and Configuration
Guide.
Effective in version 9.6.1 HotFix 2, Big Data Edition supports Cloudera CDH clusters on Amazon EC2.
Kerberos Authentication
Effective in version 9.6.1 HotFix 2, you can configure user impersonation for the native environment.
Configure user impersonation to enable different users to run mappings or connect to big data sources
and targets that use Kerberos authentication.
For more information, see the Informatica 9.6.1 HotFix 2 Big Data Edition User Guide.
Performance Optimization
Big Data Edition has the following new features for performance optimization:
Effective in version 9.6.1 HotFix 2, you can enable data compression on temporary staging tables to
optimize performance when you run a mapping in the Hive environment. When you enable data
compression on temporary staging tables, mapping performance might increase.
To enable data compression on temporary staging tables, you must configure the Hive connection to use
the codec class name that the Hadoop cluster uses. You must also configure the Hadoop cluster to
enable compression on temporary staging tables.
For more information, see the Informatica 9.6.1 HotFix 2 Big Data Edition User Guide.
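For example, a sketch of this configuration, assuming the cluster uses the Snappy codec (the codec class and cluster property values below are illustrative, not prescriptive):
    On the Hive connection, set the codec class name to org.apache.hadoop.io.compress.SnappyCodec.
    On the cluster, enable intermediate compression in hive-site.xml:
        hive.exec.compress.intermediate=true
        hive.intermediate.compression.codec=org.apache.hadoop.io.compress.SnappyCodec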
Parallel sort
Effective in version 9.6.1 HotFix 2, when you use a Sorter transformation in a mapping, the Data
Integration Service enables parallel sorting by default when it pushes the mapping logic to the Hadoop
cluster.
For more information, see the Informatica 9.6.1 HotFix 2 Big Data Edition User Guide.
Business Glossary
This section describes new Business Glossary features in version 9.6.1 HotFix 2.
Refresh Asset
Effective in version 9.6.1 HotFix 2, you can refresh an asset in the Glossary workspace. Refresh the asset
to view updates to the properties that content managers made after you opened the asset.
For more information, see the Informatica 9.6.1 HotFix 2 Business Glossary Guide.
Effective in version 9.6.1 HotFix 2, the Analyst tool displays an alert when you try to create an asset with
a name that already exists in the glossary. You can ignore the alert and create the asset with a duplicate
name.
For more information, see the Informatica 9.6.1 HotFix 2 Business Glossary Guide.
Effective in version 9.6.1 HotFix 2, you can use an LDAP domain when you configure server settings to
enable the Business Glossary Desktop client to reference the business glossary on a machine that hosts
the Analyst Service.
isp Command
Effective in version 9.6.1 HotFix 2, the following table describes an updated isp command:
Command Description
Effective in version 9.6.1 HotFix 2, Informatica updates the reference data sets that the accelerator rules
use to analyze and enhance data.
For more information, see the Informatica Data Quality 9.6.1 HotFix 2 Accelerator Guide.
Informatica Developer
This section describes new Informatica Developer features in version 9.6.1 HotFix 2.
Effective in version 9.6.1 HotFix 2, Informatica Developer supports the Microsoft SQL Server Datetime2
data type. The Datetime2 data type can store a range of values from Jan 1, 0001 A.D. 00:00:00 to Dec 31,
9999 A.D. 23:59:59.9999999.
Informatica Domain
This section describes new Informatica domain features in version 9.6.1 HotFix 2.
Effective in version 9.6.1 HotFix 2, you can set up and launch Informatica services with multiple nodes on
Amazon EC2. You can launch an Informatica domain that contains up to four nodes.
Informatica DiscoveryIQ
Effective in version 9.6.1 HotFix 2, Informatica DiscoveryIQ, a product usage tool, sends routine reports
on data usage and system statistics to Informatica. Data collection and upload are enabled by default.
You can choose not to send any usage statistics to Informatica.
Informatica Transformations
This section describes new Informatica transformation features in version 9.6.1 HotFix 2.
Effective in version 9.6.1 HotFix 2, you can use the Address Validator transformation to validate Taiwan
addresses in the Mandarin Traditional Chinese script. You can use ports from the Discrete or Multiline
group to define the input address.
To enter a Mandarin Traditional Chinese address on a single line, use the Formatted Address Line 1 port.
Effective in version 9.6.1 HotFix 2, the Address Validator transformation returns the county name when
the address contains a valid ZIP code and locality. The transformation can add the county name
regardless of an Ix match status for the address. The transformation adds the name to a Province
output port. If the state identifier is absent from the address, the transformation adds the state identifier
to a Province port.
When you validate an address that contains hyphenated house numbers, the transformation moves the
second part of the house number to a Sub-building port.
Effective in version 9.6.1 HotFix 2, you can configure the Address Validator transformation to specify the
output format for the following elements:
• Street, building, and sub-building descriptors in Australia and New Zealand addresses
• Street descriptors in German addresses
By default, the transformation returns the descriptor that the reference database specifies for the
address. To specify the output format for the descriptors, configure the Global Preferred Descriptor
property on the transformation.
Effective in version 9.6.1 HotFix 2, you can return the address key for a United Kingdom address. The
address key is an eight-digit numeric code that identifies the address in the Postcode Address File from
the Royal Mail. To add the address key to an address, select the Address Key port. To return the address
key, the transformation reads supplementary reference data for the United Kingdom.
Effective in version 9.6.1 HotFix 2, the Address Validator transformation can validate Ban or block
information in a Japan address. The Address Validator transformation writes the data to the Street
Name 2 port or an equivalent port for dependent street data.
Effective in version 9.6.1 HotFix 2, you can configure the Address Validator transformation to add the
Gaiku code to a Japanese address. To add the code to the address, select the Gaiku Code port.
You can combine the current Choumei Aza code and the Gaiku code in a single string and return the
address that the codes identify. To return the complete address, select the Choumei Aza and Gaiku Code
JP port and configure the transformation to run in address code lookup mode.
The Japanese reference data contains the Gaiku code, the current Choumei Aza code, and any earlier
version of the Choumei Aza code for the address. When you set the Matching Extended Archive property
to ON, the transformation writes all of the codes to the output address.
Effective in version 9.6.1 HotFix 2, the Address Validator transformation supports the seven-digit postal
codes that Israel Post defines for addresses in Israel. The seven-digit postal codes replace the five-digit
postal codes that Israel Post previously defined. For example, the seven-digit postal code for Nazareth in
Israel is 1623726. Previously, the postal code for Nazareth was 16237.
Effective in version 9.6.1 HotFix 2, the Address Validator transformation recognizes keywords, such as
Zimmer and App, in the Street Number ports for addresses from Germany, Austria, and Switzerland. The
Address Validator transformation writes the keywords to sub-building ports in the output address.
Effective in version 9.6.1 HotFix 2, you can configure the Address Validator transformation to add the
IRIS code to an address in France. To add the code to the address, select the INSEE-9 Code output port.
An IRIS code uniquely identifies a statistical unit in a commune in France. INSEE, or the National Institute
for Statistics and Economic Research in France, defines the codes. France has approximately 16,000
IRIS units.
Effective in version 9.6.1 HotFix 2, you can configure the Address Validator transformation to return
rooftop-level geocodes for United Kingdom addresses. Rooftop geocodes identify the center of the
primary building on a site or a parcel of land.
To generate the rooftop geocodes, set the Geocode Data Type property on the transformation to Arrival
Point. You must also install the Arrival Point reference data for the United Kingdom.
Effective in version 9.6.1 HotFix 2, Informatica updates the address reference data for Spain. The
Address Validator transformation can use the address reference data to validate sub-building-level
information in Spanish addresses.
Effective in version 9.6.1 HotFix 2, Informatica updates the address reference data for Turkey.
The Address Validator transformation can also perform the following operations when it validates
Turkish addresses:
• The transformation can identify a building name and a street name on the Delivery Address Line 1
port.
• The transformation adds a slash symbol (/) between a building element and a sub-building element
when the sub-building element is a number.
Effective in version 9.6.1 HotFix 2, Informatica adds the following improvements to address validation
for addresses in Brazil:
• The Address Validator transformation can add a third level of sub-building information to the Delivery
Address Line and Formatted Address Line ports. The Brazil address system contains three levels of
sub-building information.
• The Address Validator transformation validates kilometer information on the Street Additional Info
port.
Note: The Address Validator transformation uses a comma, and not a decimal point, in kilometer
information for Brazil.
For more information, see the Informatica 9.6.1 HotFix 2 Address Validator Port Reference and the Informatica
9.6.1 HotFix 2 Developer Transformation Guide.
RunMapplet
The RunMapplet action calls and runs a mapplet as part of a Data Processor transformation. The output
of RunMapplet is read into the data holder specified in the RunMapplet action. Use the RunMapplet
action to perform tasks such as data masking, data quality, data lookup, and other activities usually
related to relational transformations.
You can use the Validation Rules editor to create user-defined rules that validate XML data. If the data
violates the rules, the action generates an XML validation report.
Use the New Transformation wizard to create a Data Processor transformation with Parquet input or
output.
For more information, see the Informatica 9.6.1 HotFix 2 Data Transformation User Guide.
Metadata Manager
This section describes new Metadata Manager features in version 9.6.1 HotFix 2.
For more information about creating and configuring Cloudera Navigator resources, see the Informatica 9.6.1
HotFix 2 Metadata Manager Administrator Guide.
For more information about creating and configuring Microsoft SQL Server Integration Services resources,
see the Informatica 9.6.1 HotFix 2 Metadata Manager Administrator Guide.
For more information about supported metadata source versions, see the PCAE Metadata Manager XConnect
Support Product Availability Matrix on Informatica Network:
[Link]
For more information about configuring Embarcadero ERStudio resources, see the Informatica 9.6.1 HotFix 2
Metadata Manager Administrator Guide.
PowerCenter Resources
Effective in version 9.6.1 HotFix 2, you can create and load a PowerCenter resource when the PowerCenter
repository database type is IBM DB2 for LUW and the database user name differs from the schema name. To
specify a schema name that differs from the database user name, enter the schema name in the Schema
Name property when you configure the PowerCenter resource.
For more information about configuring PowerCenter resources, see the Informatica 9.6.1 HotFix 2 Metadata
Manager Administrator Guide.
For more information about viewing the impact summary, see the Informatica 9.6.1 HotFix 2 Metadata
Manager User Guide.
PowerCenter
This section describes new PowerCenter features in version 9.6.1 HotFix 2.
PowerCenter Upgrade
Effective in version 9.6.1 HotFix 2, PowerCenter preserves the [Link] file when you upgrade from a hotfix
or a base release of the same version. The upgrade operation preserves an [Link] file in the server/bin
directory and creates an empty configuration file named [Link] in the same directory.
When you upgrade from an earlier PowerCenter version, the upgrade operation writes an empty [Link] file
to the server/bin directory. The upgrade operation creates a backup copy of any [Link] file that it finds in
the directory.
For more information, see the Informatica 9.6.1 HotFix 2 Upgrade Guides.
PowerExchange
This section describes new PowerExchange features in version 9.6.1 HotFix 2.
The infacmd pwx CreateLoggerService and infacmd pwx UpdateLoggerService commands can now include
the following optional startup parameter in the -StartParameters option:
encryptepwd=encryption_password
A password in encrypted format that enables the encryption of PowerExchange Logger log files. When
this password is specified, the PowerExchange Logger can generate a unique encryption key for each
Logger log file. The password is stored in the CDCT file in encrypted format. The password is not stored
in CDCT backup files and is not displayed in CDCT reports that you generate with the PowerExchange
PWXUCDCT utility. To use this encryption password, you must also specify coldstart=Y in the
-StartParameters option.
For more information, see the Informatica 9.6.1 HotFix 2 Command Reference.
To enable log-file encryption for a PowerExchange Logger Service, specify an encryption password in the
startup parameters for a cold start of the PowerExchange Logger Service. You enter the encryption password
in one of the following ways:
• In the infacmd pwx CreateLoggerService or infacmd pwx UpdateLoggerService command, add the
encryptepwd parameter in the -StartParameters option.
• In the Informatica Administrator, edit the PowerExchange Logger Service configuration properties. In the
Start Parameters property, add the encryptepwd parameter.
Note: The PowerExchange Logger uses AES encryption algorithms. You can set the type of AES algorithm in
the ENCRYPTOPT statement of the PowerExchange Logger configuration file.
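For example, a sketch of a cold start that enables log-file encryption (the domain name, service name, and password value are placeholders, and the exact separator syntax for start parameters can vary by environment):
    infacmd pwx UpdateLoggerService -dn MyDomain -sn MyLoggerService -StartParameters "coldstart=Y,encryptepwd=MyEncryptedPassword"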
PowerExchange Adapters
This section describes new PowerExchange adapter features in version 9.6.1 HotFix 2.
For more information, see the Informatica PowerExchange for Cassandra 9.6.1 HotFix 2 User Guide.
For more information, see the Informatica PowerExchange for LinkedIn 9.6.1 HotFix 2 User Guide.
• You can use the user-defined functions in Informatica to transform the Binary data type in a Hive
environment.
• PowerExchange for Hive processes sources and targets that contain the Timestamp data type. The
Timestamp data type format is YYYY-MM-DD HH:MM:SS.fffffffff. The Timestamp data type has a
precision of 29 and a scale of 9.
• PowerExchange for Hive processes sources and targets that contain the Date data type. The Date data
type has a range of 0000-01-01 to 9999-12-31. The format is YYYY-MM-DD. The Date data type has a
precision of 10 and a scale of 0.
For more information, see the Informatica PowerExchange for Hive 9.6.1 HotFix 2 User Guide.
For more information, see the Informatica PowerExchange for MongoDB 9.6.1 HotFix 2 User Guide.
• You can configure PowerExchange for Salesforce to capture changed data from a Salesforce object that
is replicateable and contains the CreatedDate and SysModstamp fields.
• You can use PowerExchange for Salesforce to connect to Salesforce API v30 and v31.
• The Data Integration Service can push Filter transformation logic to Salesforce sources.
For more information, see the Informatica PowerExchange for Salesforce 9.6.1 HotFix 2 User Guide.
For more information, see the Informatica PowerExchange for Cassandra 9.6.1 HotFix 2 User Guide for
PowerCenter.
For more information, see the Informatica PowerExchange for MongoDB 9.6.1 HotFix 2 User Guide for
PowerCenter.
For more information, see the Informatica PowerExchange for Salesforce Analytics 9.6.1 HotFix 2 User Guide
for PowerCenter.
For more information, see the Informatica PowerExchange for Vertica 9.6.1 HotFix 2 User Guide for
PowerCenter.
For more information, see the Informatica PowerCenter 9.6.1 HotFix 2 Advanced Workflow Guide.
To insert arrays of data into a Teradata target by using an ODBC connection, configure the
OptimizeTeradataWrite custom property at the session level or at the PowerCenter Integration Service level
and set its value to 1.
For more information, see the Informatica PowerCenter 9.6.1 HotFix 2 Workflow Basics Guide.
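For example, the custom property is entered as a name-value pair in the session configuration or in the PowerCenter Integration Service custom properties (a sketch):
    OptimizeTeradataWrite=1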
Connectivity
This section describes changes to connectivity in version 9.6.1 HotFix 2.
Effective in version 9.6.1 HotFix 2, you can configure the following connection attributes for Sybase IQ:
• Block factor
• Block size
If you upgrade to version 9.6.1 HotFix 2 and want to use the block factor and block size connection attributes
while connecting to a Sybase IQ database version that is earlier than 16.0, configure the
SybaseIQPre16VersionSupport custom property and set its value to Yes.
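For example, the custom property is entered as a name-value pair (a sketch):
    SybaseIQPre16VersionSupport=Yes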
Informatica Analyst
The following changes apply to Informatica Analyst:
• Effective in version 9.6.1 HotFix 2, the Analyst tool displays the full name of the user who owns or most
recently updated a Model repository object. The full name appears in any location that identifies the user,
for example, in the asset details in the Library workspace.
Previously, the Analyst tool displayed the login name of the user in the Library workspace and in other
locations.
To view the full name, the login name, and any email address stored for the user, place the cursor on the
full name.
• Effective in version 9.6.1 HotFix 2, you can select the full name of the user in filter operations in the
Analyst tool.
Previously, you selected the login name of the user in filter operations in the Analyst tool.
Informatica Transformations
This section describes changes to Informatica transformations in version 9.6.1 HotFix 2.
• Effective in version 9.6.1 HotFix 2, the Address Validator transformation uses version 5.6.0 of the
Informatica Address Doctor software engine. The engine enables the new features that you can use in the
Address Validator transformation in version 9.6.1 HotFix 2.
Previously, the transformation used version 5.5.0 of the Informatica Address Doctor software engine.
• Effective in version 9.6.1 HotFix 2, the Address Validator transformation can return county information
and sub-building information when you validate United States address data in suggestion list mode. The
transformation returns the county information on a Province 2 port. The transformation returns the sub-
building information on a sub-building port.
The transformation continues to return county information and sub-building information when you
validate the address data in batch mode, certified mode, and interactive mode.
Previously, the transformation did not return the information for United States address data in suggestion
list mode.
• Effective in version 9.6.1 HotFix 2, the National Institute of Statistics and Economic Studies Code port
name changes to INSEE 9-Code. You do not need to update the configuration of an Address Validator
transformation that uses the National Institute of Statistics and Economic Studies Code port.
• Effective in version 9.6.1 HotFix 2, all Locality Complete ports, Locality Name ports, and Locality Preferred
Name ports have a precision of 100.
Previously, the ports had a precision of 50.
To further increase performance for XML input, you can clear the Normalize XML Input setting in the Settings
tab when XML input is already normalized.
Metadata Manager
This section describes changes to Metadata Manager in version 9.6.1 HotFix 2.
• When you load a business glossary resource, Metadata Manager extracts published business terms in
unpublished categories. Previously, Metadata Manager did not extract a published business term when
the category to which the term belongs was unpublished.
• Metadata Manager no longer displays audit trail information for business terms and categories. To view
audit trail information for business terms or categories, view the object history in the Analyst tool.
restoreRepository
Restores Metadata Manager repository contents from a back-up file. You can restore repository contents
to an empty repository. Previously, you had to create repository contents before you could run this
command. The options for this command are not changed.
createRepository
Creates the Metadata Manager warehouse tables and imports models for metadata sources into the
Metadata Manager repository. You must enable the Metadata Manager Service before you can run this
command.
You can run this command from an mmRepoCmd instance that is installed with the Informatica services,
Informatica client, or Informatica utilities. Previously, you could run this command from an mmRepoCmd
instance that was installed with the Informatica services.
The options for this command are changed. You enter command options for the Metadata Manager user
instead of for the domain user. Also, you no longer have to enter command options for the PowerCenter
repository. The Metadata Manager Service process restores the PowerCenter repository content when
you start the Metadata Manager Service.
The following command options are new:
-url: Host name and port number of the Metadata Manager Service that runs the Metadata Manager
application.
--encryptedPassword: Encrypted password flag for the Metadata Manager user password.
--namespace: Name of the security domain to which the Metadata Manager user belongs.
The following command options are no longer used:
--securityDomain: Name of the security domain to which the Informatica domain user belongs.
-pcRepositoryName: Name of the PowerCenter repository that contains the metadata objects used to load
metadata into the Metadata Manager warehouse.
-pcRepositoryUser: User account for the PowerCenter repository. Use the repository user account you
configured for the Repository Service.
-pcRepositoryNamespace: Name of the security domain to which the PowerCenter repository user belongs.
-restorePCRepository: Restore the repository back-up file for the PowerCenter repository to create the
objects used by Metadata Manager in the PowerCenter repository database.
The following command option is changed:
--keyTab: Specifies the path and file name of the keytab file for the Metadata Manager user instead of for
the domain user.
deleteRepository
Deletes Metadata Manager repository content, including all metadata and repository database tables.
You can run this command from an mmRepoCmd instance that is installed with the Informatica services,
Informatica client, or Informatica utilities. Previously, you could run this command from an mmRepoCmd
instance that was installed with the Informatica services.
The options for this command are changed. You enter command options for the Metadata Manager user
instead of for the domain user.
The following command options are new:
-url: Host name and port number of the Metadata Manager Service that runs the Metadata Manager
application.
--encryptedPassword: Encrypted password flag for the Metadata Manager user password.
--namespace: Name of the security domain to which the Metadata Manager user belongs.
The following command option is no longer used:
--securityDomain: Name of the security domain to which the Informatica domain user belongs.
The following command option is changed:
--keyTab: Specifies the path and file name of the keytab file for the Metadata Manager user instead of for
the domain user.
restorePCRepository
Restores a PowerCenter repository back-up file that contains Metadata Manager objects to the
PowerCenter repository database. You must run this command from an mmRepoCmd instance that is
installed with the Informatica services. The options for this command are not changed.
To create or restore the Metadata Manager repository, you must belong to the default Administrator group.
Previously, you needed the Manage Services privilege with permission on the Metadata Manager Service.
PowerExchange Adapters
This section describes changes to PowerExchange Adapters in version 9.6.1 HotFix 2.
• When you push the DATE_DIFF function to Vertica, Vertica rounds the date difference value to the nearest
integer, whereas the PowerCenter Integration Service returns a float value. If you want the date difference
to be treated as a float value in the Vertica database, you can disable pushdown optimization.
• When you specify the format as Y and push the DATE_DIFF function to Vertica, Vertica calculates the
difference between the dates in days, whereas the PowerCenter Integration Service calculates the
difference in years. If you want the difference value to be expressed in years, you can disable pushdown
optimization. See the example after this list.
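For example (illustrative figures): for the dates 2015-01-01 and 2015-07-02, DATE_DIFF with the format 'Y' returns approximately 0.5 when the PowerCenter Integration Service evaluates it, whereas the same expression pushed down to Vertica returns 182, the difference in days.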
Metadata Manager
This section describes release tasks for Metadata Manager in version 9.6.1 HotFix 2.
To display the new class and icon, reload any Informatica Platform resource that includes HDFS data objects.
Big Data
This section describes new big data features in version 9.6.1 HotFix 1.
Data Warehousing
Big Data Edition has the following new features and enhancements for data warehousing:
Binary Data Type
Effective in version 9.6.1 HotFix 1, a mapping in the Hive environment can process binary data when it
passes through the ports in a mapping. However, the mapping cannot process expression functions that
use binary data.
For more information, see the Informatica 9.6.1 HotFix 1 Big Data Edition User Guide.
Effective in version 9.6.1 HotFix 1, the Data Integration Service can truncate the partition in the Hive
target. You must choose to both truncate the partition in the Hive target and truncate the target table.
For more information, see the Informatica 9.6.1 HotFix 1 Big Data Edition User Guide.
Hadoop Distributions
Effective in version 9.6.1 HotFix 1, Big Data Edition added support for the following Hadoop distributions:
For more information, see the Informatica 9.6.1 HotFix 1 Big Data Edition Installation and Configuration Guide.
Hadoop Ecosystem
Big Data Edition has the following new features and enhancements for the Hadoop ecosystem:
Cloudera Manager
Effective in version 9.6.1 HotFix 1, you can use Cloudera Manager to distribute the Big Data Edition
installation as parcels across the Hadoop cluster nodes for Cloudera CDH 5.1.
For more information, see the Informatica 9.6.1 HotFix 1 Big Data Edition Installation and Configuration
Guide.
High Availability
Effective in version 9.6.1 HotFix 1, you can enable the Data Integration Service and the Developer tool to
read from and write to a highly available Hadoop cluster. A highly available Hadoop cluster can provide
uninterrupted access to the JobTracker, NameNode, and ResourceManager in the cluster. You must
configure the Developer tool to communicate with a highly available Hadoop cluster on a Hadoop
distribution.
For more information, see the Informatica 9.6.1 HotFix 1 Big Data Edition Installation and Configuration
Guide.
Kerberos Authentication
Effective in version 9.6.1 HotFix 1, you can configure the Informatica domain that uses Kerberos
authentication to run mappings in a Hadoop cluster that also uses Kerberos authentication. You must
configure a one-way cross-realm trust to enable the Hadoop cluster to communicate with the
Informatica domain.
Previously, you could run mappings in a Hadoop cluster that used Kerberos authentication if the
Informatica domain did not use Kerberos authentication.
For more information, see the Informatica 9.6.1 HotFix 1 Big Data Edition User Guide.
Schedulers
Effective in version 9.6.1 HotFix 1, the following schedulers are valid for Hadoop distributions:
• Capacity scheduler
• Fair scheduler
For more information, see the Informatica 9.6.1 HotFix 1 Big Data Edition Installation and Configuration
Guide.
Business Glossary
This section describes new Business Glossary features in version 9.6.1 HotFix 1.
Effective in version 9.6.1 HotFix 1, you can export the relationship view diagram after you open it. Export
the relationship view diagram to access the diagram when you are not logged in to the Analyst tool or to
share the diagram with users who cannot access Business Glossary.
For more information, see the Informatica 9.6.1 HotFix 1 Business Glossary Guide.
Effective in version 9.6.1 HotFix 1, you can view multi-valued attributes in Business Glossary Desktop.
Previously, you could only view single-valued attributes. Properties such as Contains and See Also are
examples of multi-valued attributes.
pmrep Command
Effective in version 9.6.1 HotFix 1, the following table describes an updated pmrep command:
Command Description
isp Commands
Effective in version 9.6.1 HotFix 1, the following table describes new isp commands:
migrateUsers: Migrates the groups, roles, privileges, and permissions of users in a native security domain
to users in one or more LDAP security domains. Requires a user migration file.
Connectivity
This section describes new connectivity features in version 9.6.1 HotFix 1.
Netezza Connectivity
Effective in version 9.6.1 HotFix 1, you can use ODBC to read data from and write data to a Netezza
database.
For more information, see the Informatica 9.6.1 HotFix 1 Developer Tool Guide.
rule_GTIN_Validation
Validates a Global Trade Item Number (GTIN). The rule validates eight-digit, twelve-digit, thirteen-digit,
and fourteen-digit numbers. The rule returns "Valid" if the check digit is correct for the number and
"Invalid" if the check digit is incorrect.
Find the rule in the General_Data_Cleansing folder of the accelerator project in the Model repository.
For more information, see the Informatica 9.6.1 HotFix 1 Accelerator Guide.
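For example, using the standard GS1 check-digit calculation: for the thirteen-digit number 4006381333931, the first twelve digits are weighted alternately by 1 and 3 from the left, which gives a sum of 89. The check digit is (10 - (89 mod 10)) mod 10 = 1, which matches the final digit of the number, so the rule returns "Valid".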
Matching Rules
Effective in version 9.6.1 HotFix 1, all Data Quality accelerator rules that perform match analysis contain a
pass-through input port and a pass-through output port. Use the ports to pass unique identifiers through a
rule.
Find the rules in the Matching_Deduplication folder of the accelerator project in the Model repository.
For more information, see the Informatica 9.6.1 HotFix 1 Accelerator Guide.
Documentation
This section describes new or updated guides included with the Informatica documentation in version 9.6.1
HotFix 1.
Effective in version 9.6.1 HotFix 1, a new version of the guide contains URLs and parameters of the
Business Glossary REST APIs used to develop a client application.
Informatica Developer
This section describes new Informatica Developer features in version 9.6.1 HotFix 1.
For more information, see the Informatica 9.6.1 HotFix 1 Developer Tool Guide.
For more information, see the Informatica 9.6.1 HotFix 1 Mapping Guide.
Informatica Transformations
This section describes new Informatica transformation features in version 9.6.1 HotFix 1.
Effective in version 9.6.1 HotFix 1, the Address Validator transformation includes the following new ports:
Input Data
Output port that contains the data elements in an input address record in a structured XML format.
Result
Output port that contains data elements that represent the data in an output address in a structured XML
format.
Find the Input Data port and the Result port in the XML port group on the transformation.
For more information, see the Informatica 9.6.1 HotFix 1 Address Validator Port Reference.
Mappings
This section describes new mapping features in version 9.6.1 HotFix 1.
Informatica Mappings
Branch Pruning Optimization Method
Effective in version 9.6.1 HotFix 1, the Data Integration Service can apply the branch pruning optimization
method. When the Data Integration Service applies the branch pruning method, it removes transformations
that do not contribute any rows to the target in a mapping.
The Developer tool enables the branch pruning optimization method by default when you choose the normal
or full optimizer level. You can disable branch pruning if the optimization does not increase performance by
setting the optimizer level to minimal or none.
For more information, see the Informatica Data Services 9.6.1 HotFix 1 Performance Tuning Guide.
Constraints
Effective in version 9.6.1 HotFix 1, the Data Integration Service can read constraints from relational sources,
logical data objects, physical data objects, or virtual tables. A constraint is a conditional expression that the
values on a data row must satisfy. When the Data Integration Service reads constraints, it might drop
rows for which the constraint expression does not evaluate to TRUE, depending on the optimization
method applied.
For more information, see the Informatica 9.6.1 HotFix 1 Mapping Guide.
Metadata Manager
This section describes new Metadata Manager features in version 9.6.1 HotFix 1.
Browser Support
Effective in version 9.6.1 HotFix 1, the Metadata Manager application can run in the following web browsers:
• Create Microsoft SQL Server or Oracle resources that extract metadata from these database versions.
• Create Business Glossary, Informatica Platform, or PowerCenter resources when the Model repository or
PowerCenter repository is in either of these database versions.
• Create the Metadata Manager repository in either of these database versions.
For more information about creating resources, see the Informatica 9.6.1 HotFix 1 Metadata Manager
Administrator Guide. For more information about creating the Metadata Manager repository, see the
Informatica 9.6.1 HotFix 1 Installation and Configuration Guide.
Security Enhancements
Effective in version 9.6.1 HotFix 1, when you create or edit a PowerCenter resource, you can prevent
Metadata Manager from displaying secure JDBC parameters that are part of the JDBC URL for the
PowerCenter repository database.
For more information, see the Informatica 9.6.1 HotFix 1 Metadata Manager Administrator Guide.
PowerCenter
This section describes new PowerCenter features in version 9.6.1 HotFix 1.
PowerExchange Adapters
This section describes new PowerExchange adapter features in version 9.6.1 HotFix 1.
For more information, see the Informatica PowerExchange for Cassandra 9.6.1 HotFix 1 User Guide.
For more information, see the Informatica PowerExchange for Greenplum 9.6.1 HotFix 1 User Guide.
For more information, see the Informatica PowerExchange for HBase 9.6.1 HotFix 1 User Guide.
For more information, see the Informatica PowerExchange for HDFS 9.6.1 HotFix 1 User Guide.
For more information, see the Informatica PowerExchange for Hive 9.6.1 HotFix 1 User Guide.
For more information, see the Informatica PowerExchange for Salesforce 9.6.1 HotFix 1 User Guide.
For more information, see the Informatica PowerExchange for SAS 9.6.1 HotFix 1 User Guide.
For more information, see the Informatica PowerExchange for Tableau 9.6.1 HotFix 1 User Guide.
PowerExchange Adapters for PowerCenter
This section describes new PowerCenter adapter features in version 9.6.1 HotFix 1.
For more information, see the Informatica PowerExchange for Cassandra 9.6.1 HotFix 1 User Guide for
PowerCenter.
For more information, see the Informatica PowerExchange for Greenplum 9.6.1 HotFix 1 User Guide for
PowerCenter.
For more information, see the Informatica PowerExchange for Vertica 9.6.1 HotFix 1 User Guide for
PowerCenter.
Reference Data
This section describes new reference data features in version 9.6.1 HotFix 1.
Probabilistic Models
Effective in version 9.6.1 HotFix 1, you can view the total number of reference data values that you assigned
to a label in a probabilistic model.
You can use wildcard characters to search for data values in a probabilistic model.
For more information, see the Informatica 9.6.1 HotFix 1 Reference Data Guide.
Rule Specifications
This section describes new rule specification features in version 9.6.1 HotFix 1.
• Return the date and time at which the Data Integration Service runs the mapping that contains the rule
statement.
• Determine if a time stamp references a point in time before or after the Data Integration Service runs the
mapping that contains the rule statement.
• Convert a string of date and time data to a date/time data type.
For more information, see the Informatica 9.6.1 HotFix 1 Rule Specification Guide.
Application Services
This section describes changes to application services in version 9.6.1 HotFix 1.
• No Pre-Load Countries
• No Pre-Load Geocoding Countries
• No Pre-Load Suggestion List Countries
• No Pre-Load Address Code Countries
The Content Management Service sets the default value for each property to ALL.
Previously, the Content Management Service did not set default values for the properties.
Note: The default properties do not affect the data output from any address validation mapping that you
created in an earlier product version.
Business Glossary
This section describes changes to Business Glossary in version 9.6.1 HotFix 1.
Informatica Transformations
This section describes changes to Informatica transformations in version 9.6.1 HotFix 1.
Address Validator Transformation
The following changes apply to the Address Validator transformation in version 9.6.1 HotFix 1:
• Effective in version 9.6.1 HotFix 1, the Address Validator transformation populates additional fields in a
Software Evaluation and Recognition Program (SERP) report. The SERP report includes the following
fields:
- Processing Date
Previously, when you exported a Data Processor transformation with an XMap object, it was re-imported into
the Developer tool as a transformation with a Script object.
Metadata Manager
This section describes changes to Metadata Manager in version 9.6.1 HotFix 1.
Previously, you could extract metadata from Microsoft Analysis and Reporting Services version 9.0 (2005).
Search
Effective in version 9.6.1 HotFix 1, the behavior for customizing the list of words to ignore in searches is
changed.
• You no longer need to create the [Link] file manually. Instead, the Informatica services installer
creates a default [Link] file in the following directory:
<Informatica installation directory>\services\shared\jars\pc\classes
• You must set the UseCustomStopWords property in the [Link] file to true.
The [Link] file created by the installer contains the default list of English words to ignore in searches.
To customize the word list, update the [Link] file, enable the UseCustomStopWords property, disable
and enable the Metadata Manager Service, and then manually update the search index for all resources.
Previously, to customize the word list, you had to create the [Link] file manually, disable and enable
the Metadata Manager Service, and then manually update the search index for all resources.
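For example, assuming the file uses the standard name=value properties format, the entry takes this form (a sketch):
    UseCustomStopWords=true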
PowerCenter Transformations
This section describes changes to PowerCenter transformations in version 9.6.1 HotFix 1.
Previously, you set the substitution dictionary owner name and the storage owner name in the
Transformations view on the Mapping tab in the session properties.
PowerExchange
This section describes changes to PowerExchange functionality in the Informatica domain in version 9.6.1
HotFix 1.
PowerExchange Adapters
This section describes changes to PowerExchange adapters in version 9.6.1 HotFix 1.
Previously, the name of the Informatica PowerExchange for Mongo DB ODBC driver file was
[Link].
Previously, the name of the Informatica PowerExchange for Mongo DB ODBC driver file was
[Link].
Reference Data
This section describes changes to reference data functionality in version 9.6.1 HotFix 1.
Probabilistic Models
Effective in version 9.6.1 HotFix 1, the Developer tool uses version 3.4 of the Stanford Named Entity
Recognition API to compile a probabilistic model.
Previously, the Developer tool used version 1.2.6 of the API to compile a probabilistic model.
PowerExchange Adapters
This section describes release tasks for PowerExchange adapters in version 9.6.1 HotFix 1.
You can use existing mappings with the deprecated PowerExchange for Salesforce adapter. However, you
cannot update the existing mappings or connections to use the PowerExchange for Salesforce connection
listed under the Cloud connection category. You must create new mappings and connections to use the new
PowerExchange for Salesforce adapter.
For more information, see the Informatica PowerExchange for Salesforce 9.6.1 HotFix 1 User Guide.
After you upgrade to Informatica 9.6.1 HotFix 1, replace the [Link] file with the back-up copy of the
[Link] file, and change the MongoDB driver name in the [Link] file to
[Link].
For more information, see the Informatica PowerExchange for MongoDB 9.6.1 HotFix 1 User Guide.
After you upgrade to Informatica 9.6.1 HotFix 1, replace the [Link] file with the back-up copy of the
[Link] file, and change the MongoDB driver name in the [Link] file to
[Link].
For more information, see the Informatica PowerExchange for MongoDB 9.6.1 HotFix 1 User Guide for
PowerCenter.
Informatica supports Google Chrome and Microsoft Internet Explorer browsers. After you upgrade, clear the
browser caches on the machines from which you access the Informatica web client applications. The
Informatica web client applications include the Administrator tool, Analyst tool, Reporting Service, Reporting
and Dashboards Service, and Metadata Manager.
Chapter 24
Application Services
This section describes new application services features in version 9.6.1.
The Content Management Service determines the preload behavior for address code lookup reference data
and interactive reference data. Use the Address Validation process properties to set the preload behavior.
The following properties control the preload behavior for address code lookup data:
Full Pre-Load Address Code Countries: Lists the countries for which the Data Integration Service loads all
reference data into memory before address validation begins.
Partial Pre-Load Address Code Countries: Lists the countries for which the Data Integration Service loads
address reference metadata and indexing structures into memory before address validation begins.
No Pre-Load Address Code Countries: Lists the countries for which the Data Integration Service loads no
address reference data into memory before address validation begins.
The following properties control the preload behavior for interactive reference data in addition to batch and
certified reference data:
Full Pre-Load Countries: Lists the countries for which the Data Integration Service loads all batch, certified,
and interactive reference data into memory before address validation begins.
Partial Pre-Load Countries: Lists the countries for which the Data Integration Service loads batch, certified,
and interactive metadata and indexing structures into memory before address validation begins.
No Pre-Load Countries: Lists the countries for which the Data Integration Service does not load batch,
certified, or interactive reference data into memory before address validation begins.
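For example (a sketch; countries are identified here by ISO three-character codes, which is an assumption about the expected value format): set Full Pre-Load Countries to DEU to load all batch, certified, and interactive reference data for Germany into memory before address validation begins.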
For more information, see the Informatica 9.6.1 Application Service Guide.
Big Data
This section describes new Big Data features in version 9.6.1.
If the mapping is not enabled for high precision, the Data Integration Service converts all decimal values to
double values.
If the mapping is enabled for high precision, the Data Integration Service converts decimal values with a
precision greater than 28 to double values.
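For example, under these rules a port defined as decimal(20,5) retains decimal precision when the mapping is enabled for high precision, while a port defined as decimal(38,10) is converted to double because its precision exceeds 28.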
For more information, see the Informatica 9.6.1 Big Data Edition User Guide.
• Enter advanced Hive or Hadoop properties to configure or override Hive or Hadoop cluster properties in
[Link] on the machine on which the Data Integration Service runs.
• Enter the user name of the user that the Data Integration Service impersonates to run mappings on the
Hadoop cluster.
User Authentication
You can enable the Data Integration Service to run mapping and workflow jobs on a Hadoop cluster that uses
Kerberos authentication. The Hadoop cluster authenticates the SPN of the Data Integration Service user
account to run mapping and workflow jobs on the Hadoop cluster. To enable another user to run jobs on the
Hadoop cluster, you can configure the SPN of the Data Integration Service user account to impersonate
another user account.
For more information, see the Informatica 9.6.1 Big Data Edition User Guide.
For more information, see the Informatica 9.6.1 Big Data Edition Installation and Configuration Guide.
Business Glossary
This section describes new Business Glossary features in version 9.6.1.
Business Initiatives
A business initiative is a container of Glossary assets that you want to collectively approve and publish
in business glossary. Use a business initiative to publish multiple business terms, categories, and
policies at the same time. The business initiative goes through the same approval process as any other
Glossary asset.
You can add default values for custom properties that you create when you customize a Glossary asset
template.
You can see a visual representation of the relationships that business terms and policies have with other
assets in business glossary. The asset relationship visualization diagram is dynamic and interactive. You
can rearrange the context of the diagram, filter the assets that display in the diagram, and change the
number of levels.
Synonym Retirement
You can set a retirement date for synonyms in business glossary. The state of the synonym changes
after the retirement date. Business glossary consumers view the state to identify the validity of the
synonym.
For more information, see the Informatica 9.6.1 Business Glossary Guide.
Environment Variables
The following table describes new environment variables that you can use with command line programs:
INFA_NODE_KEYSTORE_PASSWORD: Stores the password for the infa_keystore.jks file for infasetup commands.
INFA_NODE_TRUSTSTORE_PASSWORD: Stores the password for the infa_truststore.jks file for infasetup commands.
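For example, on Linux or UNIX you might export both variables in the shell before you run an infasetup command. The password values are placeholders:
    export INFA_NODE_KEYSTORE_PASSWORD=<keystore_password>
    export INFA_NODE_TRUSTSTORE_PASSWORD=<truststore_password>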
printSPNAndKeytabNames: Generates the list of SPN and keytab file names for the nodes and services in the domain.
switchToGatewayNode: The command contains an option for the database truststore file (-dbtl). Enter the path and file name of the truststore file for the secure domain configuration repository database. The option is required if you use a secure database for the domain configuration repository.
rebuildDependencyGraph: Rebuilds the object dependency graph so that you can view object dependencies after an upgrade.
infasetup Command
The following table describes a new infasetup command:
updateKerberosConfig: Changes the realm name that the Informatica domain users belong to or changes the service realm name that the Informatica domain services belong to. This command does not change the Kerberos configuration.
The following infasetup commands contain an option for the database truststore (-dbtl):
- BackupDomain
- DefineDomain
- DefineGatewayNode
- DeleteDomain
- RestoreDomain
- updateGatewayNode
- upgradeDomainMetadata
Enter the path and file name of the truststore file for the secure domain repository database. The option is required if you configured a secure domain repository database for the domain.
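For example, a BackupDomain invocation might supply the truststore path as follows. All other required options are elided and the file path is a placeholder; see the command reference for the full syntax:
    infasetup BackupDomain ... -dbtl /secure/config/db_truststore.jks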
createRepository: The --domainPassword option is required only when the domain uses Kerberos authentication and you do not specify the --keyTab option for the domain user. Previously, this option was always required.
deleteRepository: The --domainPassword option is required only when the domain uses Kerberos authentication and you do not specify the --keyTab option for the domain user. Previously, this option was always required.
getResource: The -includePassword option is added. You can include or exclude the resource password in the resource configuration file. Previously, the command always included the password.
restorePCRepository: The --domainPassword option is required only when the domain uses Kerberos authentication and you do not specify the --keyTab option for the domain user. Previously, this option was always required.
mmRepoCmd
Effective in version 9.6.1, you use the mmRepoCmd command line program to back up and restore Metadata
Manager repository database contents.
• When you restore repository contents, mmRepoCmd encrypts sensitive data in the Metadata Manager
repository with the domain encryption key.
• mmRepoCmd gets repository database connection information from the Metadata Manager Service.
When you run commands, you do not need to specify connection parameters as arguments.
Previously, you used the backupCmdLine command line program to back up and restore Metadata Manager
repository database contents. backupCmdLine is removed.
createConnection: The command contains the kerberized_connection (-K) option. The option indicates that the database you are connecting to runs on a network that uses Kerberos authentication.
rcfmu
Effective in version 9.6.1, you can use rcfmu to migrate resource configuration files from Metadata Manager
9.1.0, 9.5.x, and 9.6.0 to the current version. rcfmu contains a new option, -smv, that specifies the original
resource configuration file version.
Previously, you used rcfmu to migrate resource configuration files from Metadata Manager 9.1.0 to 9.5.x or
9.6.0.
rmu
Effective in version 9.6.1, you can use rmu to migrate resources from Metadata Manager 9.1.0, 9.5.x, and
9.6.0 to the current version. rmu detects the original resource version.
Previously, you used rmu to migrate resources from Metadata Manager 9.1.0 to 9.5.x or 9.6.0.
Documentation
This section describes new guides included with the Informatica documentation in version 9.6.1. Some new
guides are organized based on shared functionality among multiple products and replace previous guides.
Informatica Big Data Edition Installation and Configuration Guide
Contains information about installing Informatica Big Data Edition and configuring mappings to work
with multiple Hadoop distributions. Previously, installation was documented in the PowerCenter Big Data
Edition User Guide.
Informatica Installation and Configuration Guide
Contains information about planning the domain, preparing databases, installing Informatica services
and clients, and creating application services for all Informatica platform products. Previously,
installation was documented in guides specific to the Data Quality, Data Services, and PowerCenter
products.
Informatica Upgrading from Version 9.6.0
Contains information about upgrading all Informatica platform products from version 9.6.0 to version
9.6.1. Previously, upgrade was documented in guides specific to the Data Quality, Data Services, and
PowerCenter products.
Informatica Upgrading from Version 9.5.1
Contains information about upgrading all Informatica platform products from version 9.5.1 to version
9.6.1. Previously, upgrade was documented in guides specific to the Data Quality, Data Services, and
PowerCenter products.
Informatica Upgrading from Version 9.5.0
Contains information about upgrading all Informatica platform products from version 9.5.0 to version
9.6.1. Previously, upgrade was documented in guides specific to the Data Quality, Data Services, and
PowerCenter products.
Informatica Upgrading from Version 9.1.0
Contains information about upgrading all Informatica platform products from version 9.1.0 to version
9.6.1. Previously, upgrade was documented in guides specific to the Data Quality, Data Services, and
PowerCenter products.
PowerExchange Adapters for Informatica Release Notes
Contains important information about installation, closed enhancements, fixed limitations, and known
limitations for PowerExchange adapters for Informatica. Previously, this information was documented in
the Informatica Release Notes.
PowerExchange Adapters for PowerCenter Release Notes
Contains important information about installation, closed enhancements, fixed limitations, and known
limitations for PowerExchange adapters for PowerCenter. Previously, this information was documented
in the Informatica Release Notes.
Informatica Administrator
This section describes new Informatica Administrator features in version 9.6.1.
Informatica Developer
This section describes new Informatica Developer features in version 9.6.1.
Object Dependencies
In the Developer tool, you can view the object dependencies for an object in the Object Dependencies view to
perform an impact analysis on affected objects before you modify or delete the object.
For more information, see the Informatica 9.6.1 Developer Tool Guide.
For more information, see the Informatica Development Platform 9.6.1 Informatica Connector Toolkit
Developer Guide.
Informatica Transformations
This section describes new transformation features in version 9.6.1.
Modes
You can configure the Address Validator transformation to run in the following modes:
Address Code Lookup Mode
When you select address code lookup mode, the Data Integration Service reads an identification code
and returns the corresponding address elements from the reference data. The identification code can
refer to a locality, street, or mailbox. For example, you can enter the choumei aza code for a Japanese
address and retrieve the complete address as output.
Interactive Mode
When you select interactive mode, address validation reads a partial address and returns all addresses
from the reference data that match the input elements. Select interactive mode to add data to an
incomplete address. You can enter the partial address on a single input port.
You can also enter a partial address on a single input port when you configure the transformation to run
in suggestion list mode.
Ports
You can select the following ports for the Address Validator transformation:
Count
Output port that indicates the number of addresses in the address reference data sets that match the
data in the input address.
Count Overflow
Output port that indicates whether the reference data contains addresses that address validation does
not return to the transformation.
Gmina Code PL
Output port that returns the identification code for the municipality or commune to which a Polish address
belongs.
Output port that contains a seven-digit identification code for the city or state to which a Brazilian
address belongs.
Locality Identifier DE
Input and output ports that contain the identification code for a German locality.
Input and output port that contains a seven-digit identification code for the street in a South African
address.
Input and output port that identifies the administrative regions to which a French address belongs. The
National Institute of Statistics and Economic Studies code is also called the INSEE code.
Output port that returns a unique delivery point code for a Japanese mailbox.
Input and output ports that contain an identification code for a German municipality.
Output port that contains building-level post code data for an Austrian address.
Output port that returns a street-level post code for a Serbian address.
Output port that contains a two-digit suffix for the post code of a Swiss address.
Street Identifier DE
Input and output ports that contain a street-level identification code for a German address.
Output ports that indicate if address validation can return supplementary data for an address.
The transformation includes supplementary status ports for Austria, Brazil, France, Germany, Poland,
South Africa, and Switzerland.
Output port that contains the identification code for the locality to which a Polish address belongs.
Output port that contains the identification code for the street in a Polish address.
Output port that returns a unique delivery point code for a United Kingdom mailbox.
For more information, see the Informatica 9.6.1 Address Validator Port Reference and the Informatica 9.6.1
Developer Transformation Guide.
Properties
You can configure the following advanced properties for the Address Validator transformation:
Alias Locality
The property determines whether address validation replaces a valid location alias with the official
location name.
The property determines whether address validation returns a unique delivery point code for an out-of-
date Japanese address.
For more information, see the Informatica Data Transformation 9.6.1 User Guide.
When you add a Data Processor transformation that reads Avro input to a mapping, you also add a complex
file reader to pass the Avro input to the transformation. For a mapping with a Data Processor transformation
that generates Avro output, you pass the output to a complex file writer.
You can also auto-generate a Data Processor transformation with XML input, output, or both, with the New
Transformation wizard. Use an .xsd schema file or a sample file to define the expected XML hierarchy.
For more information, see the Informatica Data Transformation 9.6.1 User Guide.
Match Transformation
This section describes new features of the Match transformation that you create in the Developer tool.
You can specify whether the transformation updates a current identity index data store with index data from
a mapping data source. Use the Persistence Method option to set the update policy. Set a policy to update
the data store with any index data from the data source that the data store does not contain.
For more information, see the Informatica 9.6.1 Developer Transformation Guide.
SQL Transformation
This section describes new features of the SQL transformation that you create in the Developer tool.
You can use the SQL transformation to invoke stored procedures from a Sybase database.
For more information, see the Informatica 9.6.1 Developer Transformation Guide.
Installer
This section describes new Informatica platform installer features in version 9.6.1.
For more information, see the Informatica 9.6.1 Installation and Configuration Guide.
Mappings
This section describes new mapping features in version 9.6.1.
Informatica Mappings
This section describes new features of mappings that you create in the Developer tool.
For more information, see the Informatica 9.6.1 Big Data Edition User Guide.
Glossary View
When you view a category or business term in the Glossary view, you can open the category or term in the
Analyst tool by clicking the View in Informatica Analyst toolbar icon.
For more information, see the Informatica 9.6.1 Metadata Manager User Guide.
Resource Properties
Effective in version 9.6.1, database management, JDBC, and Microstrategy resources have new resource
configuration properties.
Database Management Resources
The following table describes the new resource configuration property for database management
resources:
Secure JDBC Parameters: Secure JDBC parameters that you want to append to the JDBC connection URL.
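For example, you might keep encryption settings out of the visible connection URL by entering them as secure parameters. The URL below uses the DataDirect-style driver prefix that Informatica ships; the parameter names vary by driver and are shown only as an illustration:
    JDBC connection URL: jdbc:informatica:oracle://dbhost:1521;SID=orcl
    Secure JDBC Parameters: EncryptionMethod=SSL;TrustStorePassword=<password>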
JDBC Resources
The following table describes the new resource configuration property for JDBC resources:
Case sensitivity: Specifies the case sensitivity setting for the metadata source database. By default, the Metadata Manager Agent uses the JDBC driver to determine whether the database is case sensitive.
Microstrategy Resources
The following table describes the new resource configuration property for Microstrategy 7.0 - 9.x
resources:
Import schema only: Imports the schemas for the selected projects without the reports and documents. By default, Metadata Manager imports the schemas, reports, and documents.
For more information, see the Informatica 9.6.1 Metadata Manager Administrator Guide.
Resource Versions
You can create resources of the following versions:
• Business Objects 14.1 (XI 4.1 SP2). Previously, you could create Business Objects resources up to version
14 (XI R4) SP6.
• Microstrategy 9.4.1. Previously, you could create Microstrategy resources up to version 9.3.1.
• Oracle 12c. Previously, you could create Oracle resources up to version 11g Release 2.
For information about creating resources, see the Informatica 9.6.1 Metadata Manager Administrator Guide.
For more information, see the Informatica 9.6.1 Metadata Manager Administrator Guide.
Security
Metadata Manager contains the following security enhancements:
Metadata Manager uses the encryption key for the Informatica domain to encrypt sensitive data, such as
passwords, in the Metadata Manager repository.
For more information about the encryption key for the Informatica domain, see the Informatica 9.6.1
Security Guide.
You can prevent the Administrator tool from displaying secure JDBC parameters that are part of the
Metadata Manager repository database URL. You can also prevent Metadata Manager from displaying
secure JDBC parameters that are part of the database connection URL for some database management
resources.
You can prevent Metadata Manager from displaying secure JDBC parameters for the following database
management resources:
For information about specifying secure JDBC parameters in the Metadata Manager repository database
URL, see the Informatica 9.6.1 Application Service Guide. For information about specifying secure JDBC
parameters in the database connection URL for database management resources, see the Informatica
9.6.1 Metadata Manager Administrator Guide.
To increase security for the PowerCenter repository, the Custom Metadata Configurator prompts you for
the PowerCenter repository user name and password when you generate the mappings that extract
metadata from custom metadata files.
For more information, see the Informatica 9.6.1 Metadata Manager Custom Metadata Integration Guide.
Listener Service
This section describes new Listener Service features in version 9.6.1.
When you configure the domain to use Kerberos authentication, you can configure Informatica clients, the
Data Integration Service, and the PowerCenter Integration Service to find a PowerExchange Listener Service
in the domain.
To do so, include the optional service_name parameter in the NODE statement in the DBMOVER configuration
file on the client, Data Integration Service, or PowerCenter Integration Service machine.
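For example, a NODE statement might name the Listener Service as the final positional parameter. The node, host, and service names below are placeholders, and the ellipsis stands for the optional positional parameters documented in the PowerExchange reference:
    NODE=(node1,TCPIP,pwxhost,2480,...,listener_service1)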
For more information, see the Informatica 9.6.1 Application Service Guide.
PowerExchange Adapters
This section describes new PowerExchange adapter features in version 9.6.1.
Informatica Adapters
This section describes new Informatica adapter features.
PowerExchange for DataSift
You can extract historical data from DataSift for Twitter sources.
For more information, see the Informatica PowerExchange for DataSift 9.6.1 User Guide.
PowerExchange for Greenplum
• You can use PowerExchange for Greenplum to load large volumes of data into Greenplum tables. You
can run mappings developed in the Developer tool. You can run the mappings in native or Hive run-
time environments.
• You can also use PowerExchange for Greenplum to load data to a HAWQ database in bulk.
For more information, see the Informatica PowerExchange for Greenplum 9.6.1 User Guide.
PowerExchange for LinkedIn
You can extract information about a group, information about posts of a group, comments about a group
post, and comments about specific posts from LinkedIn. You can also extract a list of groups suggested
for the user and a list of groups in which the user is a member from LinkedIn.
For more information, see the Informatica PowerExchange for LinkedIn 9.6.1 User Guide.
PowerExchange for HBase
You can use PowerExchange for HBase to read data in parallel from HBase. The Data Integration Service
creates multiple Map jobs to read data in parallel.
For more information, see the Informatica PowerExchange for HBase 9.6.1 User Guide.
PowerExchange for Hive
You can create a Hive connection that connects to HiveServer or HiveServer2. Previously, you could
create a Hive connection that connects to HiveServer. HiveServer2 supports Kerberos authentication and
concurrent connections.
For more information, see the Informatica PowerExchange for Hive 9.6.1 User Guide.
PowerExchange for MongoDB
You can use the Schema Editor to change the schema of MongoDB collections. You can also use virtual
tables for MongoDB collections that have nested columns.
For more information, see the Informatica PowerExchange for MongoDB 9.6.1 User Guide.
PowerExchange for Teradata Parallel Transporter API
When you load data to a Teradata table in a Hive run-time environment, you can use the Teradata
Connector for Hadoop (TDCH) to increase performance. To use TDCH to load data, add the EnableTdch
custom property at the Data Integration Service level and set its value to true.
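For example, the custom property is a simple name-value pair on the Data Integration Service. A minimal sketch of the entry, with the value the text above calls for:
    EnableTdch=true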
For more information, see the Informatica PowerExchange for Teradata Parallel Transporter API 9.6.1
User Guide.
PowerCenter Adapters
This section describes new PowerCenter adapter features.
PowerExchange for LDAP
In the session properties, you can specify the path and name of the file that contains multiple filter
conditions to query the LDAP entries.
For more information, see the Informatica PowerExchange for LDAP 9.6.1 User Guide for PowerCenter.
PowerExchange for MongoDB
You can use the Schema Editor to change the schema of MongoDB collections. You can also use virtual
tables for MongoDB collections that have nested columns.
For more information, see the Informatica PowerExchange for MongoDB 9.6.1 User Guide for
PowerCenter.
PowerExchange for Netezza
• When you use bulk mode to read data from or write data to Netezza, you can override the table name
and schema name in the session properties.
• You can specify a table name prefix when you configure a session to load data to a Netezza target.
The table name prefix overrides the schema for the Netezza table.
For more information, see the Informatica PowerExchange for Netezza 9.6.1 User Guide for PowerCenter.
PowerExchange for Salesforce
• You can configure a session to use the Salesforce Bulk API to read data in bulk from a Salesforce
source.
• You can dissociate a custom child object from a standard parent object.
For more information, see the Informatica PowerExchange for Salesforce 9.6.1 User Guide for
PowerCenter.
PowerExchange for SAP NetWeaver
• When you run a file mode session to read data from SAP through ABAP, you can configure the
FileCompressEnable custom property to enable compressed data transfer. When you compress data,
you can increase the session performance and decrease the disk storage that the staging file needs.
• The Source_For_BCI relational target in the BCI listener mapping that Informatica ships contains a
new column called DataSourceName. You can use this field to partition the data that the
Source_For_BCI relational target receives from SAP.
• Informatica ships an activation mapping along with the BCI_Mappings.xml file. You can use the
activation mapping to activate multiple DataSources in SAP simultaneously.
• When you use numeric delta pointers to extract business content data, you can extract the changed
data alone without doing a full transfer of the entire data.
For more information, see the Informatica PowerExchange for SAP NetWeaver 9.6.1 User Guide for
PowerCenter.
When you run a column profile in the Analyst tool, you can view the following visual charts in the column
profile results:
• Pie charts that represent the value frequencies and column patterns for a column.
• A bar chart that represents the percentage of rows with null values, unique values, and non-unique
values in a column.
Drill-down Filters
In the Analyst tool, you can right-click a column value in the drill-down results and add the column value
as a filter condition.
You can measure the value of data quality using scorecards in the Analyst tool. Define a cost unit for a
scorecard metric, assign a variable or fixed cost, and view the cost trend chart along with the score trend
chart. You can then monitor the value of data that you selected at the metric and scorecard levels.
Reference Data
This section describes new reference data features in version 9.6.1.
Probabilistic Models
You can perform the following tasks when you create or edit a probabilistic model in the Developer tool:
• You can assign a color to each label that you add to a probabilistic model.
• You can view the total number of labels that you assign to the data values in a row.
• You can view the total number of data values that the probabilistic model associates with a label.
For more information, see the Informatica 9.6.1 Reference Data Guide.
Rule Specifications
This section describes new rule specifications features in version 9.6.1.
You can perform the following tasks when you work with rule specifications in the Analyst tool:
• You can change the order of the rule statements in a rule set.
• You can test the operations of a single rule set.
• You can save the data that you use to test a rule set or a rule specification, and you can delete the data.
• You can specify a null value in a condition or an action in a rule statement.
• You can use data that you copy from Microsoft Excel to test a rule set or a rule specification.
For more information, see the Informatica 9.6.1 Rule Specification Guide.
HAWQ Connectivity
You can use ODBC to read data from and write data to a HAWQ database.
For more information, see the Informatica 9.6.1 Developer Tool Guide.
Informatica Developer supports the Microsoft SQL Server Uniqueidentifier data type. The
Uniqueidentifier data type has a precision of 38 and a scale of 0.
For more information, see the Informatica 9.6.1 Developer Tool Guide.
Informatica Developer supports the Oracle float data type. The float data type has a precision of 1 to 15
and a scale of 0.
For more information, see the Informatica 9.6.1 Developer Tool Guide.
Informatica Functions
This section describes new features of Informatica functions.
ANY Function
You can use the ANY function to return any row in the selected port.
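For example, the following expression returns the ORDER_AMOUNT value from any row that satisfies the filter condition. The port names are invented for illustration, and the optional filter argument is an assumption based on the pattern of the other aggregate functions:
    ANY( ORDER_AMOUNT, ORDER_AMOUNT > 0 )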
For more information, see the Informatica 9.6.1 Transformation Language Reference.
Changes (9.6.1)
Big Data
This section describes changes to Big Data in version 9.6.1.
Effective in version 9.6.1, you can choose not to select a Hive version for the validation environment when
you configure a mapping to run in the Hive environment.
The Data Integration Service evaluates a valid Hive version for the Hadoop cluster and validates the mapping.
Previously, you had to select a Hive version for the validation environment.
Domain
This section describes changes to the Informatica domain in version 9.6.1.
Effective in version 9.6.1, Informatica dropped support for SUSE Linux Enterprise Server 10. If any node in the
domain is on SUSE Linux Enterprise Server 10, you must migrate the node to a supported operating system
before upgrading the node to 9.6.1. For more information, see the Informatica upgrade guides.
Informatica Transformations
This section describes changes to Informatica transformations in version 9.6.1.
Effective in version 9.6.1, the Address Validator transformation uses version 5.5.0 of the Address Doctor
software engine.
Previously, the transformation used version 5.4.1 of the Address Doctor software engine.
Effective in version 9.6.1, the transformation adds a two-character country code to the following port names:
Effective in version 9.6.1, you can disable the Alias Street property on the transformation. The property
determines whether address validation replaces a street alias with the official street name.
Previously, you configured the property to replace all street aliases or to replace any term that is not a valid
street alias.
Previously, a mapping that used the key masking technique would create the same masked output when run
after upgrade.
Effective in version 9.6.1, you can export a Data Processor transformation to PowerCenter with pass-through
ports or a relational to hierarchical transformation. Previously, you could only export Data Processor
transformations to PowerCenter if they did not have relational input or output.
Informatica Mappings
This section describes changes to mappings that you create in the Developer tool.
The Data Integration Service can create partitions for a mapping when the mapping contains a DB2 for
LUW target that has more database partitions than the parallelism value. If the DB2 for LUW target has
more database partitions than the parallelism value, the Data Integration Service uses all of the writer
threads defined by the parallelism value. The Data Integration Service distributes multiple database
partitions to some of the writer threads.
Previously, if the DB2 for LUW target had more database partitions than the parallelism value, the Data
Integration Service did not create partitions for the entire mapping. The Data Integration Service used
one thread to process each mapping pipeline stage.
When the maximum parallelism for a mapping is Auto, the actual parallelism value equals the minimum
of the following values:
• Maximum parallelism value set for the Data Integration Service process.
• Maximum number of partitions for all flat file, IBM DB2 for LUW, and Oracle sources in the mapping.
The Data Integration Service determines the number of partitions based on the source type. The
number of partitions for a flat file source equals the maximum parallelism value set for the Data
Integration Service process. The number of partitions for a DB2 for LUW or Oracle relational source
equals the number of database partitions in the relational source.
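For example, if the maximum parallelism value for the Data Integration Service process is 8 and the mapping reads an Oracle source with 4 database partitions, the actual parallelism value is the minimum of the two values, which is 4. The numbers here are illustrative.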
Previously, when the maximum parallelism for a mapping was Auto, the actual parallelism value equaled
the maximum parallelism value set for the Data Integration Service process.
Metadata Manager
This section describes changes to Metadata Manager in version 9.6.1.
Effective in version 9.6.1, when you export a resource configuration through Metadata Manager or
mmcmd, you can include or exclude the encrypted resource password in the resource configuration file.
If you exclude the password, and the resource uses a password, you must enter it when you import the
resource configuration.
Privilege Changes
Effective in version 9.6.1, you can export a resource configuration if you have the View Resource
privilege. You can import a resource configuration if you have the Load Resource privilege.
Previously, to export or import a resource configuration, you needed the Load Resource privilege.
The following table describes the deleted resource configuration properties for Microstrategy 7.0 - 9.x
resources:
Data model reverse engineer joins: Optionally, transforms SQL joins of a model into foreign key relationships.
Dimensional model reverse engineering: Optionally, reverse engineers the following dimensional objects into relational objects when there is a direct match between the dimensional object and the relational object:
- The dimension name, description, and role to the underlying table
- The attribute or measure name, description, and datatype to the underlying column
PowerCenter Transformations
This section describes changes to PowerCenter transformations in version 9.6.1.
Previously, a mapping that used the key masking technique would create the same masked output when run
after upgrade.
PowerExchange Adapters
This section describes changes to PowerExchange adapters in version 9.6.1.
PowerExchange for Salesforce does not support the following Salesforce API versions:
• 7.0
• 8.0
• 16.0
Error Logging
The PowerCenter Integration Service writes error messages to the error log for the session.
Previously, the PowerCenter Integration Service wrote error messages to both the error log and the
session log.
For Bulk API target sessions, configure at least 10 to 50 MB of space for the Java temporary directory on
the PowerCenter Integration Service machine.
Previously, the Bulk API did not use the Java temporary directory when writing to Salesforce targets.
You can no longer import fields from objects related to the following Salesforce objects:
• ActivityHistory
• EmailStatus
• Name
• OpenActivity
• OwnedContentDocument
Previously, you could import fields from objects related to these objects.
Use the Salesforce service URL to configure connections to Salesforce. To use the latest version of the
Salesforce API, create an application connection or update the service URL in an existing application
connection.
Previously, PowerExchange for Salesforce used version 27.0 of the Salesforce API.
For sessions that read from Salesforce with the standard API, the PowerCenter Integration Service no
longer includes SOAP requests in the session log.
Effective in version 9.6.1, the total count of unique values in column profile results does not include the null
column values.
Previously, null column values were included in the total count of unique values.
Rule Specifications
This section describes changes to rule specifications in version 9.6.1.
Effective in version 9.6.1, you can use the rule statement options to specify a data value or a null value for a
condition or action.
Previously, you opened a configuration dialog box in the rule statement to specify a data value or a null
value.
Effective in version 9.6.1, you do not need the Informatica domain access permission to perform the
following operations:
Security
This section describes changes to security in version 9.6.1.
Node Level
If the domain is used for testing or development and does not require a high level of security, you can set
the service principal at the node level. You can use one SPN and keytab file for the node and all the
service processes on the node. When you create additional services on a node, you do not need to create
additional keytab files.
Process Level
If the domain is used for production and requires a high level of security, you can set the service
principal at the process level. Create a unique SPN and keytab file for each node and each process on
the node. The number of SPNs and keytab files required for each node depends on the number of service
processes that run on the node.
Previously, the Informatica domain required a unique SPN and keytab file for each node and each process on
the node.
Chapter 26
Version 9.6.0
This section describes new features and enhancements in version 9.6.0.
Informatica Analyst
This section describes new features and enhancements to Informatica Analyst.
The Analyst tool includes the following workspaces:
• Start. Access the other workspaces that you are licensed to use through access panels on this
workspace. If you have the license to perform exception management, your tasks appear in this
workspace.
• Glossary. Define and describe business concepts that are important to your organization.
• Discovery. Analyze the quality of data and metadata in source systems.
• Design. Design business logic that helps analysts and developers collaborate.
• Scorecards. Open, edit, and run scorecards that you created from profile results.
• Library. Search for assets in the Model repository. You can also view metadata in the Library workspace.
• Exceptions. View and manage exception record data for a task. View duplicate record clusters or
exception records based on the type of task you are working on. View an audit trail of the changes you
make to records in a task.
• Connections. Create and manage connections to import relational data objects, preview data, run a profile,
and run mapping specifications.
• Data Domains. Create, manage, and remove data domains and data domain groups.
• Job Status. Monitor the status of Analyst tool jobs such as data preview for all objects and drilldown
operations on profiles.
• Projects. Create and manage folders and projects and assign permissions on projects.
• Glossary Security. Manage permissions, privileges, and roles for business glossary users.
The tasks that you can perform in the Analyst tool depend on the license for Informatica products and the
privileges to perform tasks. Based on the license that your organization has, you can use the Analyst tool to
perform the following tasks:
• Define business glossaries, terms, and policies to maintain standardized definitions of data assets in the
organization.
• Perform data discovery to find the content, quality, and structure of data sources, and monitor data quality
trends.
• Define data integration logic and collaborate on projects to accelerate project delivery.
• Define and manage rules to verify data conformance to business policies.
• Review and resolve data quality issues to find and fix data quality issues in the organization.
For more information, see the Informatica 9.6.0 Analyst Tool Guide.
Informatica Installer
This section describes new features and enhancements to the Informatica platform installer.
Authentication
You can configure the Informatica domain to use Kerberos authentication. When you install the Informatica
services, you can enable Kerberos authentication for the domain. A page titled Domain - Network
Authentication Protocol appears in the Informatica services installer. To install the domain with Kerberos
authentication, select the option to enable Kerberos authentication and enter the required parameters.
Encryption Key
Informatica encrypts sensitive data such as passwords when it stores data in the domain. Informatica uses a
keyword to generate a unique encryption key with which to encrypt sensitive data stored in the domain.
A page titled Domain - Encryption Key appears in the Informatica services installer. If you create a node and
a domain during installation, you must specify a keyword for Informatica to use to generate a unique
encryption key for the node and domain. If you create a node and join a domain, Informatica uses the same
encryption key for the new node.
For more information, see the Informatica 9.6.0 installation and upgrade guides.
For more information, see the Informatica Data Explorer 9.6.0 Data Discovery Guide.
Use the TOTAL_SUM column in the following relational database views to access the profiling warehouse for
information about the sum of values in numeric columns:
• IDPV_COL_PROFILE_RESULTS
• IDPV_PROFILE_RESULTS_TRENDING
For more information, see the Informatica 9.6.0 Database View Reference.
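For example, a minimal query against the profiling warehouse reads the new column from one of the views listed above. Any additional columns in a real query depend on the view definition:
    SELECT TOTAL_SUM FROM IDPV_COL_PROFILE_RESULTS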
Curation
You can curate inferred profile results in both Analyst and Developer tools. Curation is the process of
validating and managing discovered metadata of a data source so that the metadata is fit for use and
reporting. You can approve, reject, and restore datatypes. You can also approve, reject, and restore data
domains, primary keys, and foreign keys. You can hide or show rows containing rejected datatypes or data
domains. You can exclude approved datatypes, data domains, and primary keys from column profile
inference and data domain discovery inference when you run the profile again.
For more information, see the Informatica Data Explorer 9.6.0 Data Discovery Guide.
Use the following relational database views to access the profiling warehouse for information about curated
profile results:
• IDPV_CURATED_DATATYPES
• IDPV_CURATED_DATADOMAINS
• IDPV_CURATED_PRIMARYKEYS
• IDPV_CURATED_FOREIGNKEYS
For more information, see the Informatica 9.6.0 Database View Reference.
For more information, see the Informatica Data Explorer 9.6.0 Data Discovery Guide.
Datatype Inference
You can infer multiple datatypes that match the inference criteria when you run a column profile. You can drill
down based on a column datatype in column profile results.
For more information, see the Informatica Data Explorer 9.6.0 Data Discovery Guide.
• IDPV_DATATYPES_INF_RESULTS
• IDPV_DATATYPE_FREQ_TRENDING
For more information, see the Informatica 9.6.0 Database View Reference.
Discovery Search
Discovery search finds assets and identifies relationships to other assets in the databases and schemas of
the enterprise. You can use discovery search to find where the data and metadata exists in the enterprise.
You can find physical data sources and data object relationships or you can identify the lack of documented
data object relationships. You can view the direct matches, indirect matches, and related assets from the
discovery search results.
If you perform a global search, the Analyst tool performs a text-based search for data objects, datatypes, and
folders. If you perform discovery search, in addition to the text matches, search results include objects with
relationships to the objects that match the search criteria.
For more information, see the Informatica Data Explorer 9.6.0 Data Discovery Guide.
Enterprise Discovery
You can perform enterprise discovery in Informatica Analyst. The enterprise discovery includes column
profile and data domain discovery.
For more information, see the Informatica Data Explorer 9.6.0 Data Discovery Guide.
Scorecards
You can export scorecard results to a Microsoft Excel file. The exported file contains scorecard summary,
trend charts, rows that are not valid, and scorecard properties.
For more information, see the Informatica Data Explorer 9.6.0 Data Discovery Guide.
Accelerators
The set of Informatica accelerators has the following additions:
• Informatica Data Quality Accelerator for Spain. Contains rules, reference tables, demonstration mappings,
and demonstration data objects that solve common data quality issues in Spanish data.
Address Validation
You can configure the following advanced properties on the Address Validator transformation:
Determines the type of address to validate. Set the property when input address records contain more
than one type of valid address data.
Imposes a practical limit on the number of suggested addresses that the transformation returns when
there are multiple valid addresses on a street. Set the property when you set the Ranges to Expand
property.
Determines how the transformation calculates geocode data for an address. Geocodes are latitude and
longitude coordinates. Set the property to return the following types of geocode data:
• The latitude and longitude coordinates of the entrance to a building or a plot of land.
• The latitude and longitude coordinates of the geographic center of a plot of land.
The transformation can also estimate the latitude and longitude coordinates for an address. Estimated
geocodes are called interpolated geocodes.
Determines the maximum number of characters on any line in the address. Set the property to verify that
the line length in an address does not exceed the requirements of the local mail carrier.
Ranges To Expand
Determines how the transformation returns suggested addresses for a street address that does not
specify a house number. Set the property to increase or decrease the range of suggested addresses for
the street.
You can configure the following address validation process property in the Administrator tool:
The location to which address validation writes a SendRight report and any log file that relates to the
creation of the report. Generate a SendRight report to verify that a set of New Zealand address records
meets the certification standards of New Zealand Post.
Note: You configure the Address Validator transformation to create a SendRight report file.
For more information, see the Informatica 9.6.0 Developer Transformation Guide.
Business Glossary
Business Glossary comprises online glossaries of business terms and policies that define important
concepts within an organization. Data stewards create and publish terms that include information such as
descriptions, relationships to other terms, and associated categories. Glossaries are stored in a central
location for easy lookup by end-users.
Business Glossary is made up of glossaries, business terms, policies, and categories. A glossary is the high-
level container that stores other glossary content. A business term defines relevant concepts within the
organization, and a policy defines the business purpose that governs practices related to the term. Business
terms and policies can be associated with categories, which are descriptive classifications. You can access
Business Glossary through Informatica Analyst (the Analyst tool).
For more information, see the Informatica 9.6.0 Business Glossary Guide.
For more information, see the Informatica Data Explorer 9.6.0 Data Discovery Guide.
Use the TOTAL_SUM column in the following relational database views to access the profiling warehouse for
information about the sum of values in numeric columns:
• IDPV_COL_PROFILE_RESULTS
• IDPV_PROFILE_RESULTS_TRENDING
For more information, see the Informatica 9.6.0 Database View Reference.
Curation
You can curate inferred profile results in both Analyst and Developer tools. Curation is the process of
validating and managing discovered metadata of a data source so that the metadata is fit for use and
reporting. You can approve, reject, and restore datatypes. You can also approve, reject, and restore data
domains, primary keys, and foreign keys. You can hide or show rows containing rejected datatypes or data
domains. You can exclude approved datatypes, data domains, and primary keys from column profile
inference and data domain discovery inference when you run the profile again.
For more information, see the Informatica Data Explorer 9.6.0 Data Discovery Guide.
Use the following relational database views to access the profiling warehouse for information about curated
profile results:
• IDPV_CURATED_DATATYPES
• IDPV_CURATED_DATADOMAINS
• IDPV_CURATED_PRIMARYKEYS
• IDPV_CURATED_FOREIGNKEYS
For more information, see the Informatica 9.6.0 Database View Reference.
Datatype Inference
You can infer multiple datatypes that match the inference criteria when you run a column profile. You can drill
down based on a column datatype in column profile results.
For more information, see the Informatica Data Explorer 9.6.0 Data Discovery Guide.
• IDPV_DATATYPES_INF_RESULTS
• IDPV_DATATYPE_FREQ_TRENDING
For more information, see the Informatica 9.6.0 Database View Reference.
Match Transformation
When you configure a Match transformation to read index tables, you control the types of record that the
transformation analyzes and the types of output that the transformation generates. You can configure the
transformation to analyze all the records in the data sources or a subset of the records. You can configure
the transformation to write all records as output or a subset of the records.
For more information, see the Informatica 9.6.0 Developer Transformation Guide.
Java Transformation
In a Java transformation, you can configure an input port as a partition key, a sort key, and assign a sort
direction. The partition key and sort key are valid when you process the transformation in a mapping that
runs in a Hive environment.
For more information, see the Informatica 9.6.0 Developer Transformation Guide.
Lookup Transformation
If you cache the lookup source for a Lookup transformation, you can use a dynamic cache to update the
lookup cache based on changes to the target. The Data Integration Service updates the cache before it
passes each row to the target.
For more information, see the Informatica 9.6.0 Developer Transformation Guide.
Normalizer Transformation
The Normalizer transformation is an active transformation that transforms one source row into multiple
output rows. When a Normalizer transformation receives a row that contains repeated fields, it generates an
output row for each instance of the repeated data.
Use the Normalizer transformation when you want to organize repeated data from a relational or flat file
source before you load the data to a target.
For more information, see the Informatica 9.6.0 Developer Transformation Guide.
Performance
In the Developer tool you can enable a mapping to perform the following optimizations:
Pushdown Optimization
The Data Integration Service can push expression, aggregator, operator, union, sorter, and filter functions to
Greenplum sources when the connection type is ODBC.
Rule Builder
Rule Builder is an Informatica Analyst feature that converts business rule requirements to transformation
logic. You save the business rule requirements in a rule specification. When you compile the rule
specification, the Analyst tool creates transformations that can analyze the business data according to the
requirements that you defined. The Analyst tool saves the transformations to one or more mapplets in the
Model repository.
A rule specification contains one or more IF-THEN statements. The IF-THEN statements use logical operators
to determine if the input data satisfies the conditions that you specify. You can use AND operators to link IF
statements and verify that a data value satisfies multiple conditions concurrently. You can define statements
that compare data from different inputs and test the inputs under different mathematical conditions. You can
also link statements so that the output from one statement becomes the input to another.
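For example, a rule statement expresses logic of the following schematic form. The input names and values are invented and do not reproduce the exact Rule Builder syntax:
    IF Country = 'US' AND Age < 18 THEN set Status to 'Review'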
Rule Builder represents a link between business users and the Informatica development environment.
Business users can log in to the Analyst tool to create mapplets. Developer tool users add the mapplets to
mappings and verify that the business data conforms to the business rules.
For more information, see the Informatica 9.6.0 Rule Builder Guide.
Scorecards
You can export scorecard results to a Microsoft Excel file. The exported file contains scorecard summary,
trend charts, rows that are not valid, and scorecard properties.
For more information, see the Informatica Data Explorer 9.6.0 Data Discovery Guide.
For more information, see the Informatica 9.6.0 Developer Transformation Guide.
For more information, see the Informatica Data Explorer 9.6.0 Data Discovery Guide.
Use the TOTAL_SUM column in the following relational database views to access the profiling warehouse for
information about the sum of values in numeric columns:
• IDPV_COL_PROFILE_RESULTS
• IDPV_PROFILE_RESULTS_TRENDING
For more information, see the Informatica 9.6.0 Database View Reference.
For more information, see the Informatica Data Explorer 9.6.0 Data Discovery Guide.
Use the following relational database views to access the profiling warehouse for information about curated
profile results:
• IDPV_CURATED_DATATYPES
• IDPV_CURATED_DATADOMAINS
• IDPV_CURATED_PRIMARYKEYS
• IDPV_CURATED_FOREIGNKEYS
For more information, see the Informatica 9.6.0 Database View Reference.
Datatype Inference
You can infer multiple datatypes that match the inference criteria when you run a column profile. You can drill
down based on a column datatype in column profile results.
For more information, see the Informatica Data Explorer 9.6.0 Data Discovery Guide.
Use the following relational database views to access the profiling warehouse for information about inferred
datatypes:
• IDPV_DATATYPES_INF_RESULTS
• IDPV_DATATYPE_FREQ_TRENDING
For more information, see the Informatica 9.6.0 Database View Reference.
Data Masking Transformation
• The Data Masking transformation is supported on Hadoop clusters. You can run the transformation in a
Hive environment.
• Tokenization is a masking technique in which you can provide JAR files with your own algorithm or logic
to mask string data.
• You can use the Phone masking technique to mask fields with numeric integer and numeric bigint
datatypes.
For more information, see the Informatica 9.6.0 Developer Transformation Guide.
Java Transformation
In a Java transformation, you can configure an input port as a partition key, a sort key, and assign a sort
direction. The Partition key and Sort key are valid when you process the transformation in a mapping that
runs in a Hive environment.
For more information, see the Informatica 9.6.0 Developer Transformation Guide.
Normalizer Transformation
Use the Normalizer transformation when you want to organize repeated data from a relational or flat file
source before you load the data to a target.
For more information, see the Informatica 9.6.0 Developer Transformation Guide.
Performance
In the Developer tool you can enable a mapping to perform the following optimizations:
For more information, see the Informatica Data Explorer 9.6.0 Data Discovery Guide.
REST Web Service Consumer Transformation
You can create a REST Web Service Consumer transformation from a Schema object or add elements to an
empty transformation.
Scorecards
You can export scorecard results to a Microsoft Excel file. The exported file contains scorecard summary,
trend charts, rows that are not valid, and scorecard properties.
For more information, see the Informatica Data Explorer 9.6.0 Data Discovery Guide.
For more information, see the Informatica 9.6.0 Developer Transformation Guide.
Stored Procedures
You can use the SQL transformation to invoke stored procedures from a relational database. You can create
the SQL transformation in the Developer tool by importing a stored procedure. The Developer tool adds the
ports and the stored procedure call. You can manually add more stored procedure calls in the SQL
transformation. Return zero rows, one row, or result sets from the stored procedure.
For more information, see the Informatica 9.6.0 Developer Transformation Guide.
Tableau
You can query a deployed SQL data service with Tableau through the Informatica Data Services ODBC driver.
For more information, see the Informatica 9.6.0 Data Services Guide.
• The external web service provider can authenticate the Integration Service using NTLMv2.
• In a Web Service Consumer transformation, you can use WSDL with one-way message pattern.
For more information, see the Informatica 9.6.0 Developer Transformation Guide.
For more information about the wizard, see the Informatica 9.6.0 Data Transformation User Guide.
Relational Input
A Data Processor transformation can transform relational input into hierarchical output.
For more information about relational input, see the Informatica 9.6.0 Data Transformation User Guide.
For more information about XMap or JSON, see the Informatica 9.6.0 Data Transformation User Guide.
Informatica Developer
This section describes new features and enhancements to Informatica Developer.
Alerts
In the Developer tool, you can view connection status alerts in the Alerts view.
For more information, see the Informatica 9.6.0 Developer Tool Guide.
Functions
In the Developer tool, you can use the following functions in the transformation language:
For more information, see the Informatica 9.6.0 Developer Transformation Language Reference.
JDBC Connectivity
You can use the Data Integration Service to read from relational database sources and write to relational
database targets through JDBC. JDBC drivers are installed with the Informatica services and the Informatica
clients. You can also download the JDBC driver that is JDBC 3.0 compliant from third party vendor websites.
You can use the JDBC driver to import database objects, such as views and tables, preview data for a
transformation, and run mappings.
For more information, see the Informatica 9.6.0 Developer Tool Guide.
Keyboard Accessibility
In the Developer tool, you can use keyboard shortcuts to work with objects and ports in the editor. You can
also use keyboard shortcuts to navigate the Transformation palette and the workbench.
For more information, see the Informatica 9.6.0 Developer Tool Guide.
Passphrases
In the Developer tool, you can enter a passphrase instead of a password for the following connection types:
• Adabas
• DB2 for i5/OS
• DB2 for z/OS
• IMS
• Sequential
• VSAM
A valid passphrase for accessing databases and data sets on z/OS can be up to 128 characters in length. A
valid passphrase for accessing i5/OS can be up to 31 characters in length. Passphrases can contain the
following characters:
Design API
Version 9.6.0 includes the following enhancements for the Design API:
• You can use the Design API to fetch an XML source or XML target from the PowerCenter repository.
• You can use the Design API to connect to a hierarchical VSAM data source or target through PowerExchange.
• You can use the Design API to perform repository functions in a domain that uses Kerberos
authentication. You can enable Kerberos authentication through the [Link] file or when you
create a Repository object.
For more information, see the Informatica Development Platform 9.6.0 Developer Guide.
For more information, see the Informatica Development Platform 9.6.0 Informatica Connector Toolkit
Developer Guide.
Informatica Domain
This section describes new features and enhancements to the Informatica domain.
Analyst Service
Version 9.6.0 includes the following enhancements to the Analyst Service:
• You can select a Data Integration Service configured to run Human tasks. If the Data Integration Service
associated with the Analyst Service is not configured to run Human tasks, choose a different Data
Integration Service.
• You can select a Search Service to enable searches in the Analyst tool.
• You can set the location of the export file directory to export a business glossary.
For more information, see the Informatica 9.6.0 Application Service Guide.
High Availability
Version 9.6.0 includes the following enhancements to high availability for services:
• When the Model Repository Service becomes unavailable, the Service Manager can restart the service on
the same node or a backup node. You can configure the Model Repository Service to run on one or more
backup nodes.
• When the Data Integration Service becomes unavailable, the Service Manager can restart the service on
the same node or a backup node. You can configure the Data Integration Service to run on one or more
backup nodes.
• When the Data Integration Service fails over or restarts unexpectedly, you can enable automatic recovery
of aborted workflows.
• You can enable the PowerCenter Integration Service to store high availability persistence information in
database tables. The PowerCenter Integration Service stores the information in the associated repository
database.
For more information, see the Informatica 9.6.0 Administrator Guide.
Log Management
In the Administrator tool, you can aggregate logs at the domain level or the service level based on a scenario.
You can also compress the aggregated log files to save disk space.
Passphrases
You can enter a passphrase instead of a password at the following locations:
Search Service
Create a Search Service to enable search in the Analyst tool and Business Glossary Desktop.
Workflow Graph
You can view the graphical representation of a workflow that you run in the Administrator tool. You can view
the details of the tasks within the workflow and the failure points.
Authentication
You can run Informatica with Kerberos authentication and Microsoft Active Directory (AD) directory service.
Kerberos authentication provides single sign-on capability to Informatica domain client applications. The
Informatica domain supports Active Directory 2008 R2.
Encryption Key
You can specify a keyword to generate a unique encryption key for encrypting sensitive data such as
passwords that are stored in the domain.
Workflow Security
You can configure the PowerCenter Integration Service to run PowerCenter workflows securely. The Enable
Data Encryption option enables secure communication between the PowerCenter Integration Service and the
Data Transformation Manager (DTM) process and between DTM processes.
Administrator Group
The Informatica domain includes an Administrator group with default administrator privileges. You can add
users to or remove users from the Administrator group. You cannot delete the Administrator group.
Audit Reports
In the Administrator tool, you can generate audit reports to get information on users and groups in the
Informatica domain. For example, you can get information about a user account, such as the privileges and
permissions assigned to the user and the groups associated with the user.
The following privileges are new in version 9.6.0:
• Workspace Access. User can access the Design, Discovery, Glossary, and Scorecards workspaces in the Analyst tool.
• Access Analyst. User can access the Model repository from the Analyst tool.
• Access Developer. User can access the Model repository from the Developer tool.
infacmd as Commands
The following table describes an updated infacmd as command:
Command Description
UpdateServiceOptions Updates Analyst Service options. In version 9.6.0, you can run the command to specify a Data
Integration Service to run Human tasks.
For example, the following command configures the Analyst Service AS_ID_100 to specify
DS_ID_100 as the Data Integration Service name:
infacmd as UpdateServiceOptions -dn InfaDomain -sn AS_ID_100 -un Username -pd Password
[Link]=DS_ID_100
Command Description
CreateAuditTables Creates audit tables that contain audit trail log events for bad record tables and duplicate tables
in a staging database.
Update any script that uses infacmd as CreateAuditTables.
DeleteAuditTables Deletes the audit tables that contain audit trail log events for bad record tables and duplicate tables
in a staging database.
Update any script that uses infacmd as DeleteAuditTables.
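For example, a minimal sketch of running the command follows; the domain, service, and user names are placeholders, and any command-specific options are omitted:
infacmd as CreateAuditTables -dn InfaDomain -sn AS_ID_100 -un Username -pd Password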
The following table describes updated infacmd isp commands:
Command Description
GetLog Contains the argument SEARCH for the ServiceType option. Use the argument to get the
log events for the Search Service.
ListServices Contains the argument SEARCH for the ServiceType option. Use the argument to get a list
of all Search Services running in the domain.
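For example, the following sketch lists all Search Services in a domain, where -st is the ServiceType option and the domain and user names are placeholders:
infacmd isp ListServices -dn InfaDomain -un Username -pd Password -st SEARCH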
infacmd ps Commands
The following table describes new infacmd ps commands:
Command Description
migrateProfileResults Migrates column profile results and data domain discovery results from versions 9.1.0, 9.5.0,
or 9.5.1.
synchronizeProfile Migrates documented keys, user-defined keys, committed keys, primary keys, and foreign
keys for all the profiles in a specific project from versions 9.1.0, 9.5.0, or 9.5.1.
infacmd pwx Commands
The following table describes a new infacmd pwx command:
Command Description
createdatamaps Creates PowerExchange data maps for IMS, SEQ, or VSAM data sources for bulk data movement.
PowerCenter
This section describes new features and enhancements to PowerCenter.
For more information, see the Informatica PowerCenter 9.6.0 Advanced Workflow Guide.
Transformations
You can use a parameter file to provide cache size values in the following transformations:
• Aggregator
• Joiner
• Rank
• Sorter
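For example, you might set the cache size attribute of an Aggregator transformation in the session to a user-defined session parameter and assign the value in a parameter file. The following sketch assumes a hypothetical parameter name, $ParamAggCacheSize, and a hypothetical folder, workflow, and session name; the value is in bytes:
[MyFolder.WF:wf_LoadSales.ST:s_m_LoadSales]
$ParamAggCacheSize=20000000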
For more information, see the Informatica PowerCenter 9.6.1 Transformation Guide.
For more information, see the Informatica PowerCenter Big Data Edition 9.6.0 User Guide.
Business Glossary
Business Glossary comprises online glossaries of business terms and policies that define important
concepts within an organization. Data stewards create and publish terms that include information such as
descriptions, relationships to other terms, and associated categories. Glossaries are stored in a central
location for easy lookup by end-users.
Business Glossary is made up of glossaries, business terms, policies, and categories. A glossary is the high-
level container that stores other glossary content. A business term defines relevant concepts within the
organization, and a policy defines the business purpose that governs practices related to the term. Business
terms and policies can be associated with categories, which are descriptive classifications. You can access
Business Glossary through Informatica Analyst (the Analyst tool).
For more information, see the Informatica 9.6.0 Business Glossary Guide.
Metadata Manager
This section describes new features and enhancements to Metadata Manager.
Security Enhancements
Metadata Manager contains the following security enhancements:
Metadata Manager can communicate with secure IBM DB2, Microsoft SQL Server, and Oracle databases.
Metadata Manager can communicate with these databases when they are used for the Metadata
Manager repository, for the PowerCenter repository, or as metadata sources.
Kerberos authentication
Metadata Manager can run on a domain that is configured with Kerberos authentication.
For information about configuring the domain to use Kerberos authentication, see the Informatica 9.6.0
Security Guide. For information about running Metadata Manager and mmcmd when the domain uses
Kerberos authentication, see the Informatica PowerCenter 9.6.0 Metadata Manager Administrator Guide.
Two-factor authentication
Metadata Manager can run on a Windows network that uses two factor authentication.
For information about creating resources, see the Informatica PowerCenter 9.6.0 Metadata Manager
Administrator Guide. For information about viewing resources, see the Informatica PowerCenter 9.6.0
Metadata Manager User Guide.
Resource Versions
You can create resources of the following versions:
• Microstrategy 9.3.1 and 9.4.1. Previously, you could create Microstrategy resources up to version 9.2.1.
• Netezza 7.0. Previously, you could create Netezza resources up to version 6.0.
For information about creating resources, see the Informatica PowerCenter 9.6.0 Metadata Manager
Administrator Guide.
Browser Support
You can run the Metadata Manager application in the Google Chrome web browser.
PowerExchange for Greenplum
You can configure a session to override the schema that is specified in the Greenplum connection
object.
For more information, see the Informatica PowerExchange for Greenplum 9.6.0 User Guide for
PowerCenter.
PowerExchange for Hadoop
PowerExchange for Hadoop supports the following updated versions of Hadoop distributions to access
Hadoop sources and targets:
For more information, see the Informatica PowerExchange for Hadoop 9.6.0 User Guide for PowerCenter.
PowerExchange for Microsoft Dynamics CRM
• You can use Microsoft Dynamics CRM Online version 2013 for online deployment.
• You can configure the number of rows that you want to retrieve from Microsoft Dynamics CRM.
• You can join two related entities that have one to many or many to one relationships.
• PowerExchange for Microsoft Dynamics CRM uses HTTP compression to extract data if HTTP
compression is enabled in the Internet Information Services (IIS) where Microsoft Dynamics CRM is
installed.
• You can configure the PowerCenter Integration Service to write records in bulk mode.
• You can change the location of the [Link] file and the [Link] files at run time.
For more information, see the Informatica PowerExchange for Microsoft Dynamics CRM 9.6.0 User Guide
for PowerCenter.
PowerExchange for SAP NetWeaver
• PowerExchange for SAP NetWeaver uses SAP NetWeaver RFC SDK 7.20 libraries.
• You can enable partitioning for SAP BW sessions that load data to 7.x DataSources. When you enable
partitioning, the PowerCenter Integration Service performs the extract, transform, and load for each
partition in parallel.
• You can run ABAP stream mode sessions with the Remote Function Call communication protocol.
• You can install secure transports to enforce security authorizations when you use ABAP to read data
from SAP.
• When you extract business content data from SAP Business Suite applications, you can use data
sources that belong to a custom namespace.
• When you use timestamp-based delta pointers to extract business content data, you can extract only the
changed data without a full transfer of the entire data set.
For more information, see the Informatica PowerExchange for SAP User Guide for PowerCenter.
For more information, see the Informatica PowerExchange for SAS 9.6.0 User Guide for PowerCenter.
PowerExchange for Siebel
When you import Siebel business components, you can specify the name of the Siebel repository if
multiple Siebel repositories are available. You can create and configure the [Link] file
to add the Repository Name field to the Import from Siebel wizard in PowerExchange for Siebel.
For more information, see the Informatica PowerExchange for Siebel 9.6.0 User Guide for PowerCenter.
PowerExchange for Teradata Parallel Transporter API
• You can configure a session so that Teradata PT API uses one of the spool modes to extract data
from Teradata.
• You can configure a session to use a character in place of an unsupported Teradata unicode
character while loading data to targets.
For more information, see the Informatica PowerExchange for Teradata Parallel Transporter API 9.6.0
User Guide for PowerCenter.
PowerExchange for Web Services
• The PowerCenter Integration Service can process SOAP 1.2 messages with RPC/encoded and
document/literal encoding styles. Each web service can have an operation that uses a SOAP 1.2
binding. You can create a Web Service Consumer transformation with a SOAP 1.2 binding.
• You can use PowerExchange for Web Services with SharePoint 2010 and 2013 as a web service
provider.
For more information, see the Informatica PowerExchange for Web Services 9.6.0 User Guide for
PowerCenter.
PowerExchange for HBase
PowerExchange for HBase provides connectivity to an HBase data store. Use PowerExchange for HBase
to read data from the HBase column families or write data to the column families in an HBase table.
You can read or write data to a column family or a single binary column.
You can add an HBase data object operation as a source or as a target in a mapping and run the
mappings in the native or a Hive environment.
For more information, see the PowerExchange for HBase 9.6.0 User Guide.
PowerExchange for DataSift
You can configure the HTTP proxy server authentication settings at design time.
For more information, see the Informatica PowerExchange for DataSift 9.6.0 User Guide.
PowerExchange for Facebook
• You can extract information about a group, news feed of a group, list of members in a group, basic
information about a page, and news feed from a page from Facebook.
• You can configure the HTTP proxy server authentication settings at design time.
For more information, see the Informatica PowerExchange for Facebook 9.6.0 User Guide.
PowerExchange for HDFS
• PowerExchange for HDFS supports the following Hadoop distributions to access HDFS sources and
targets:
- CDH Version 4 Update 2
- HortonWorks 1.3.2
- MapR 2.1.3
- MapR 3.0.1
• You can write text files and binary file formats, such as sequence files, to HDFS with a complex file
data object.
• You can write compressed complex files, specify compression formats, and decompress files.
• The Data Integration Service creates partitions to read data from sequence files and custom input
format files that can be split.
For more information, see the Informatica PowerExchange for HDFS 9.6.0 User Guide.
PowerExchange for Hive
• PowerExchange for Hive supports the following Hive distributions to access Hive sources and
targets:
- Cloudera CDH Version 4 Update 2
- HortonWorks 1.3.2
- MapR 2.1.3
- MapR 3.0.1
• You can write to Hive partitioned tables when you run mappings in a Hive environment.
PowerExchange for LinkedIn
• You can specify the full name of a person when you look up company information in LinkedIn.
• You can configure the HTTP proxy server authentication settings at design time.
For more information, see the Informatica PowerExchange for LinkedIn 9.6.0 User Guide.
PowerExchange for Salesforce
• You can select specific records from Salesforce by using the filter from the query property of the
Salesforce data object read operation.
• You can use a Salesforce data object read operation to look up data in a Salesforce object.
• You can configure the HTTP proxy server authentication settings at design time.
For more information, see the Informatica PowerExchange for Salesforce 9.6.0 User Guide.
PowerExchange for SAP NetWeaver
• PowerExchange for SAP NetWeaver uses SAP NetWeaver RFC SDK 7.20 libraries.
• You can install secure transports to enforce security authorizations when you use ABAP to read data
from SAP.
For more information, see the Informatica PowerExchange for SAP 9.6.0 User Guide.
PowerExchange for Twitter
• You can specify a list of user IDs or screen names in a .txt or .csv format to extract the profiles of
users. You can specify a valid user ID or a screen name to extract the profile of a user.
• You can configure the HTTP proxy server authentication settings at design time.
For more information, see the Informatica PowerExchange for Twitter 9.6.0 User Guide.
Informatica Documentation
This section describes new guides included with the Informatica documentation. Some new guides are
organized based on shared functionality among multiple products and replace previous guides.
Contains general information about Informatica Analyst (the Analyst tool). Previously, the Analyst tool
was documented in the Informatica Data Integration Analyst User Guide.
Contains information about application services. Previously, the application services were documented
in the Informatica Administrator Guide.
Contains information about the Informatica Connector Toolkit and how to develop an adapter for the
Informatica platform. You can find information on components that you define to develop an adapter
such as connection attributes, type system, metadata objects, and run-time behavior.
Contains a tutorial on how to use the Informatica Connector Toolkit to develop a sample MySQL adapter
for the Informatica platform. You can find information on how to install Informatica Connector Toolkit
and on how to create and publish a sample MySQL adapter with the Informatica Connector Toolkit.
Contains information about discovering the metadata of source systems that include content and
structure. You can find information on column profiles, data domain discovery, primary key and foreign
key discovery, functional dependency discovery, Join analysis, and enterprise discovery. Previously, data
discovery was documented in the Informatica Data Explorer User Guide.
Contains information about Business Glossary. You can find information about how to manage and look
up glossary content in the Analyst Tool. Glossary content includes terms, policies, and categories.
Previously, information about Metadata Manager Business Glossary was documented in the Informatica
PowerCenter Metadata Manager Business Glossary Guide.
Contains information about exception management for Data Quality. You can find information about
managing exception record tasks in the Analyst tool. Previously, exception management was
documented in the Informatica Data Director for Data Quality Guide, Data Quality User Guide, and Data
Services User Guide.
Contains information about Model Repository views, Profile Warehouse views, and Business Glossary
views. Previously, this book was called the Informatica Data Services Model Repository Views, and the
profile views were documented in an H2L article. The Business Glossary views are new content in this
book.
Contains information about Informatica Developer. You can find information on common functionality in
the Developer tool. Previously, the Developer tool was documented in the Informatica Developer User
Guide.
Contains information about configuring Model repository mappings. Previously, the mapping
configuration was documented in the Informatica Developer User Guide.
Contains information about mapping specifications. Previously, the mapping specifications were
documented in the Informatica Data Integration Analyst User Guide.
Contains information about profiles. The guide contains basic information about running column
profiles, creating rules, and creating scorecards. Previously, profiling was documented in the Data Quality
User Guide and Informatica Data Explorer User Guide.
Contains information about reference data objects. A reference data object contains a set of data values
that you can use to perform search operations in source data. You can create reference data objects in
the Developer tool and Analyst tool, and you can import reference data objects to the Model repository.
Previously, reference data objects were documented in the Informatica Data Quality User Guide.
Contains information about the Rule Builder feature in the Analyst tool. Use Rule Builder to describe
business rule requirements as a series of logical statements. You compile the logical statements into a
rule specification. The Analyst tool saves a copy of the rule specification as a mapplet in the Model
repository.
Contains information about security for the Informatica domain. Previously, Informatica security was
documented in the Informatica Administrator Guide.
This manual contains information about creating SQL data services, populating virtual data, and
connecting to an SQL data service with third-party tools. Previously, this book was called the Informatica
Data Services User Guide.
Enterprise Discovery
Effective in version 9.6.0, enterprise discovery includes the following changes:
• You can refresh the Model Repository Service to view the enterprise discovery results for data sources
from external connections.
Previously, after you ran an enterprise discovery profile, you had to reconnect to the Model Repository
Service.
• The Profile Model option in the profile wizard that you open by selecting File > New > Profile is renamed
to Enterprise Discovery Profile.
• The graphical view of the enterprise discovery results displays the overlap of data domains in entities
for the data domains that you choose to include in the graphical view.
Previously, you verified the data domain discovery results for a single column.
Rules
Effective in version 9.6.0, you can select multiple input columns when you apply a rule to a profile in
Informatica Analyst.
Previously, you selected one input column when you applied a rule.
Scorecards
Effective in version 9.6.0, scorecards include the following changes:
• When you select the valid values for a metric, you can view the percentage of selected valid values and
count of total valid values.
Previously, you could view the count of total valid values in the column.
• When you view the source data for a metric, by default, the Drilldown section displays the rows of source
data that are not valid.
Previously, the default value was to display rows that are valid.
• In the scorecard results, you can select a score and click the trend chart arrow to view the trend chart.
Previously, you right-clicked the score and selected the Show Trend Chart option.
Address Validator Transformation
Previously, the Address Validator transformation used version 5.3.1 of the Address Doctor software engine.
Exception Management
Previously, the Analyst tool read exception records from a staging database that the Analyst Service
identified.
To continue to analyze the records in the staging database after you upgrade, perform the following steps:
Previously, users logged in to Informatica Data Director for Data Quality to review and update the records that
a Human task specified.
Java Transformation
Effective in version 9.6.0, the Stateless advanced property for the Java transformation is valid in both the
native and Hive environments. In the native environment, Java transformations must have the Stateless
property enabled so that the Data Integration Service can use multiple partitions to process the mapping.
Previously, the Stateless property was valid only in the Hive environment. The Data Integration Service
ignored the Stateless property when a mapping ran in the native environment.
Mapping Parameters
Effective in version 9.6.0, the user-defined parameter that represents a long value is named Bigint. Previously,
this user-defined parameter was named Long.
Effective in version 9.6.0, parameter names that are defined in reusable transformations, relational,
PowerExchange, and flat file data objects, and that begin with the dollar sign ($) are renamed to a unique
name in the Model repository. However, the parameter name is not changed in the parameter file. Previously,
you could use the dollar sign ($) as the first character in mapping parameter names.
Match Transformation
Previously, a Match transformation treated null data values and empty data fields as identical data elements
in identity match analysis.
Previously, you did not have to install an SQL client because Informatica used the Microsoft OLE DB provider
for native connectivity.
If you upgrade from an earlier version, you must install the Microsoft SQL Server 2012 Native Client for the
existing mappings to work.
Reference Tables
• Effective in version 9.6.0, you can use wildcards when you search a reference table for data values in the
Developer tool. When you search a reference table for data values, the search is not case-sensitive in the
Developer tool.
Previously, you performed wildcard searches and searches that are not case-sensitive in the Analyst tool.
• Effective in version 9.6.0, the Data Integration Service stores a single instance of a reference table in
memory when multiple mappings in a process read the reference table.
Previously, the Data Integration Service stored an instance of the reference table in memory for each
mapping.
Port-to-Port Data Conversion
Effective in version 9.6.0, the Data Integration Service uses the conversion functions in the transformation
language to perform port-to-port conversions between transformations. The Data Integration Service
performs port-to-port conversions when you pass data between ports with different datatypes. If the data
that you pass is not valid for the conversion datatype, a transformation row error occurs.
Previously, the Data Integration Service did not use the transformation functions for port-to-port conversions.
The Data Integration Service used a separate algorithm. If the data that you passed contained data that was
not valid for the conversion datatype, the Data Integration Service dropped the value and used a substitute
value.
Upgraded mappings that use port-to-port data conversion might produce different output data. For example,
a mapping in a previous version produced the following output:
"0.377777","0.527777","0.000000","0.250000","0.000000","0.377777","0.250000"
After you upgrade, the same mapping might produce the following output:
"0.377777","0.527777","0","0.25","0","0.377777","0.25"
Informatica Services
Effective in version 9.6.0, the Informatica Services include the following changes:
• On Windows, when you run the command infaservice.bat startup to start the Informatica services, the
ISP console window runs in the background.
Previously, the window appeared in the foreground when you ran infaservice.bat startup to start the
Informatica services. Also, if you encounter error messages during Service Manager startup, the
installer saves the error messages to the [Link] and [Link] log files.
• On Windows, you must be a user with administrative privileges to start the Informatica services from the
command line and the Windows Start menu.
Previously, the user did not need administrative privileges to start the Informatica services.
Analyst Service
The following changes apply to the Analyst Service in version 9.6.0:
• Effective in version 9.6.0, the Analyst Service identifies the Data Integration Service that runs Human
tasks.
Previously, the Data Director Service identified the Data Integration Service that runs Human tasks.
• Effective in version 9.6.0, the Staging Database property is obsolete.
Previously, the Analyst Service used the Staging Database property to identify the database that contained
exception record tables.
Previously, you set the Max Result Count property on the Address Validator transformation.
Previously, when you ran Data Integration Service jobs in separate operating system processes, each job ran
in a separate DTM process. One DTM process ran a single DTM instance. When you ran jobs in separate
operating system processes, the Data Integration Service ignored the connection pooling properties.
Previously, you configured a Data Director Service to identify the Data Integration Service that runs Human
tasks. To identify the Data Integration Service that runs Human tasks in version 9.6.0, configure the Human
Task Properties on the Analyst Service.
The Informatica 9.6.0 upgrade process upgrades a Data Director Service to an Analyst Service. If you upgrade
an Informatica domain that includes a Data Director Service and an Analyst Service, the upgrade process
creates a separate Analyst Service for each service. After you upgrade, you can keep the Analyst Services in
the domain. Optionally, you can merge the services.
Previously, TDM was independent of the Informatica domain and not a service on the domain.
Effective in version 9.6.0, users with the Create, Edit, and Delete Projects privilege for the Model Repository
Service can perform the following actions:
• Create projects.
• Edit projects. Users must also have Write permission on the project.
• Delete projects that the user created. Users must also have Write permission on the project.
Previously, when users had the Create Projects privilege for the Model Repository Service, they could create
projects. When users had Write permission on the project, they could edit and delete the project.
Domain Security
Effective in version 9.6.0, the Enable Transport Layer Security (TLS) for the domain option in the
Administrator tool is renamed Enable Secure Communication. The Enable Secure Communication option
secures the communication between the Service Manager and all services in the Informatica domain. You
can specify a keystore and truststore file for the SSL certificate.
Previously, the Enable Transport Layer Security (TLS) for the domain option in the Administrator tool did not
enable secure communication for the PowerCenter services. The option used the default Informatica SSL
certificate.
Previously, you did not have to install an SQL client because Informatica used the Microsoft OLE DB provider
for native connectivity.
If you upgrade from an earlier version, you must install the Microsoft SQL Server 2012 Native Client for the
existing mappings to work.
Previously, pmrep stored the connection information in pmrep.cnx in the directory where you started pmrep.
Repository Connection File
Effective in version 9.6.0, each time you run pmrep connect, the command deletes the pmrep.cnx file. If the
pmrep connect command succeeds, the command replaces the pmrep.cnx file with the repository connection
information.
Previously, the pmrep connect command did not delete the pmrep.cnx file each time you ran pmrep
connect.
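For example, the following sketch connects to a repository and writes the connection information to pmrep.cnx; the repository, domain, and user names are placeholders:
pmrep connect -r MyRepository -d MyDomain -n Administrator -x MyPassword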
Previously, you had to set umask to 000 to enable operating system profiles to access files written by the
DTM.
If you upgrade from an earlier version, the umask setting is not changed. You can change the umask setting
before or after you upgrade. For example, you can change umask to 077 for maximum security. If you change
the umask setting after you upgrade, you must restart the Informatica services.
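For example, on UNIX you might set a restrictive umask in the shell that starts the Informatica services. This is a minimal sketch that assumes the INFA_HOME environment variable points to the installation directory:
umask 077
$INFA_HOME/tomcat/bin/infaservice.sh startup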
Previously, you could not enable the Data Integration Service to use multiple partitions to process a mapping
in the native environment. By default, each upgraded mapping has a maximum parallelism value of one. As a
result, partitioning is disabled for upgraded mappings.
Browser Support
Effective in version 9.6.0, the Metadata Manager application can run in the following web browsers:
• Google Chrome
• Microsoft Internet Explorer
Previously, the Metadata Manager application could run only in Microsoft Internet Explorer.
Metadata Manager Agent
Effective in version 9.6.0, you do not need to install the Metadata Manager Agent separately to extract
metadata from the following sources:
• Cognos
• Oracle Business Intelligence Enterprise Edition
• Sybase PowerDesigner
Previously, you had to install the Metadata Manager Agent separately to extract metadata from these
sources.
If you have a Metadata Manager business glossary that you created in a previous version of Metadata
Manager, you must export the glossary from the previous version of Metadata Manager before you upgrade
to version 9.6.0. After you upgrade, you can import the glossary into Informatica Analyst. To view the
Informatica Analyst business glossary in Metadata Manager, create a Business Glossary resource in
Metadata Manager 9.6.0.
For information about creating and configuring Business Glossary resources in Metadata Manager, see the
Informatica PowerCenter 9.6.0 Metadata Manager Administrator Guide. For information about viewing
Business Glossary resources in Metadata Manager, see Informatica PowerCenter 9.6.0 Metadata Manager
User Guide.
mmcmd Changes
Domain Security Changes
Effective in version 9.6.0, mmcmd has the following changes related to domain security:
Environment Variables
You might have to configure environment variables to run mmcmd. If the domain uses Kerberos
authentication, you must set the KRB5_CONFIG environment variable on your system or in the mmcmd
batch file. If secure communication is enabled for the domain, you must set the INFA_TRUSTSTORE and
INFA_TRUSTSTORE_PASSWORD environment variables in the mmcmd batch file.
Previously, you did not have to configure environment variables for mmcmd.
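For example, on Windows you might add lines such as the following to the mmcmd batch file; the paths and the password are placeholders:
set KRB5_CONFIG=C:\Informatica\kerberos\krb5.conf
set INFA_TRUSTSTORE=C:\Informatica\ssl\infa_truststore.jks
set INFA_TRUSTSTORE_PASSWORD=MyTruststorePassword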
Command Options
All mmcmd commands that authenticate with the domain contain options related to Kerberos
authentication. You must specify the options if the domain uses Kerberos authentication.
Option Description
--domainName (-dn) Required if you use Kerberos authentication and you do not specify the --gateway
option. Name of the Informatica domain.
--gateway (-hp) Required if you use Kerberos authentication and you do not specify the --
domainName option. Host names and port numbers of the gateway nodes in the
domain.
--keyTab (-kt) Required if you use Kerberos authentication and you do not specify a password. Path
and file name of the keytab file for the Metadata Manager user.
--mmServiceName (-mm) Required if you use Kerberos authentication. Name of the Metadata Manager
Service.
--namespace (-n) Required if the domain uses LDAP authentication or Kerberos authentication.
Optional if the domain uses native authentication. Name of the security domain to
which the Metadata Manager user belongs.
--password (-pw) Required if you do not use Kerberos authentication. Also required if you use
Kerberos authentication and you do not specify the --keyTab option. Password for
the Metadata Manager user.
--securityDomain (-sdn) Required if the domain uses LDAP authentication or Kerberos authentication.
Optional if the domain uses native authentication. Name of the security domain to
which the Informatica domain user belongs.
The following table describes a new mmcmd command:
Command Description
migrateBGLinks Restores the related catalog objects for a business glossary after you upgrade from version 9.5.x.
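For example, a sketch of running migrateBGLinks in a Kerberos-enabled domain with the options described above; the names and the keytab path are placeholders, and any command-specific arguments, such as the resource name, are omitted:
mmcmd migrateBGLinks -dn InfaDomain -mm MM_Service -n KerberosRealm -kt /home/infa/mmuser.keytab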
Previously, you did not have to install an SQL client because Informatica used the Microsoft OLE DB provider
for native connectivity.
If you upgrade from an earlier version, you must install the Microsoft SQL Server 2012 Native Client. Install
the client so that the Metadata Manager Service can connect to Microsoft SQL Server databases.
Previously, you edited the resource, selected the string of dots in the Password field, and entered the new
password.
For more information, see the End of Life (EOL) document at the following location:
[Link]
When you configure an HDFS connection, the default Hadoop distribution is the Cloudera distribution.
Previously, the default was the Apache distribution.
PowerExchange for LinkedIn
Effective in version 9.6.0, Informatica is not shipping PowerExchange for LinkedIn for PowerCenter.
Informatica dropped support for versions 9.1.0, 9.5.0, and 9.5.1. You cannot upgrade from versions 9.1.0,
9.5.0, or 9.5.1, or from the hotfix versions. Sessions will fail in versions 9.1.0, 9.5.0, and 9.5.1, and the hotfix
versions.
For more information, see the End of Life (EOL) document at the following location:
[Link]
Previously, you had to download and use version 6 of the Java Cryptography Extension (JCE) Unlimited
Strength Jurisdiction Policy Files.
PowerExchange for SAP NetWeaver uses SAP NetWeaver RFC SDK 7.20 libraries. You must install SAP
NetWeaver RFC SDK 7.20 libraries to run PowerExchange for SAP sessions.
Previously, you installed SAP RFC SDK classic libraries to run sessions.
You use the sapnwrfc.ini file to configure RFC-specific parameters and connection information.
Previously, you used the saprfc.ini file to configure RFC-specific parameters and connection
information.
If you upgrade from an earlier version, you must create a sapnwrfc.ini file to enable communication
between PowerCenter and SAP. You cannot use the saprfc.ini file to enable communication between
PowerCenter and SAP.
For more information, see the Informatica PowerExchange for SAP 9.6.0 User Guide for PowerCenter.
You do not need to use the SAP connection parameter TYPE in the sapnwrfc.ini file to configure the
connection type. The PowerCenter Client and PowerCenter Integration Service use the connection
parameters that you define in the sapnwrfc.ini file to infer the connection type.
For example, if you set the ASHOST parameter, the PowerCenter Client and PowerCenter Integration
Service create a connection to a specific SAP application server. If you set the MSHOST and GROUP
parameters, the PowerCenter Client and PowerCenter Integration Service create an SAP load balancing
connection.
Previously, you used the parameter TYPE to configure the connection type. For example, you set TYPE=A
to create a connection to a specific application server. You set TYPE=B to create an SAP load balancing
connection and you set TYPE=R to create a connection to an RFC server program registered at an SAP
gateway.
If you upgrade from an earlier version, you must create a new sapnwrfc.ini file and configure the
connection parameters based on the type of connection that you want to create.
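For example, a sapnwrfc.ini sketch might define one entry for each connection type; the host names, system number, system name, and logon group are placeholders:
DEST=SAP_APP
ASHOST=sapapp01
SYSNR=00

DEST=SAP_LB
MSHOST=sapmsg01
R3NAME=PRD
GROUP=PUBLIC
The first entry yields a connection to a specific application server, and the second yields an SAP load balancing connection.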
For more information, see the Informatica PowerExchange for SAP 9.6.0 User Guide for PowerCenter.
PowerExchange for SAP NetWeaver uses the RFC protocol to generate and install an ABAP program in
stream mode.
Previously, PowerExchange for SAP NetWeaver used the CPI-C protocol to generate and install an ABAP
program in stream mode.
Effective in version 9.6.0, the CPI-C protocol is deprecated and Informatica will drop support in a future
release. You can run existing ABAP programs that use the CPI-C protocol. However, you cannot generate
and install new ABAP programs that use the CPI-C protocol.
When you install an existing ABAP program that uses the CPI-C protocol, you are prompted to overwrite
the program to use the RFC protocol. Informatica recommends overwriting the program to use the RFC
protocol.
Effective in version 9.6.0, Informatica dropped support for deprecated BAPI mappings created in
versions earlier than 8.5 and deprecated IDOC mappings created in versions earlier than 7.1. If you
upgrade the deprecated mappings to version 9.6.0, the sessions will fail.
Upgrade PowerExchange for SAP NetWeaver and create new BAPI and IDoc mappings with custom
transformations.
For more information, see the End of Life (EOL) document at the following location:
[Link]
Effective in version 9.6.0, you can create a web service operation and a Web Service Consumer
transformation with a SOAP 1.2 binding. Previously, you could only create an operation with a SOAP 1.1
binding and a Web Service Consumer transformation with a SOAP 1.1 binding.
NTLMv2
Effective in version 9.6.0, the external web service provider authenticates the PowerCenter Integration
Service by using NTLM v1 or NTLM v2.
Previously, the external web service provider used only NTLM v1 to authenticate the PowerCenter Integration
Service.
PowerExchange for LinkedIn
Effective in version 9.6.0, PowerExchange for LinkedIn installs with Informatica 9.6.0.
PowerExchange for SAP NetWeaver
Effective in version 9.6.0, PowerExchange for SAP NetWeaver uses SAP NetWeaver RFC SDK 7.20 libraries.
Previously, you installed SAP RFC SDK classic libraries to run sessions.