Summary
· 14 years of professional IT experience, specializing in Data Warehousing, with full ownership of day-to-day deliverables during development.
· 7 years of ETL development experience using IBM WebSphere/InfoSphere DataStage 8.5/8.7/9.1/11.3/11.7, working extensively with DataStage Designer and Director; full Software Development Life Cycle (SDLC) experience covering system design, development, implementation, and testing.
· 5 years of experience in Data Profiling and Data Quality using IBM Information Analyzer and QualityStage 11.7.
· 2 years of experience with Trillium Control Center 14 and Informatica PowerCenter.
· Designed and developed parallel, server, and sequence jobs using DataStage Designer; experienced with stages such as Transformer, Aggregator, Merge, Join, Lookup, Sort, Copy, Remove Duplicates, Funnel, Filter, Pivot, and Shared Containers.
· Worked with and extracted data from various sources such as Oracle, DB2, XML, and flat files.
· Good knowledge of data warehousing principles including data marts, OLTP, OLAP, dimensional modeling, fact and dimension tables, and star/snowflake schema modeling.
· Extensive experience in unit, functional, system, and integration testing.
· Created local and shared containers to simplify jobs and promote reuse.
· Good experience in the Banking domain.
· Quick learner, adaptive to new and challenging technological environments.
· Good experience with the CI/CD deployment tools TeamCity and Octopus, and with GitHub.
· Good knowledge of Snowflake.
Experience
Wipro Technologies
Technical Lead | 05/2022 - Present
· Worked as a Senior Consultant at Encora Innovative Labs Private Limited from January 2021 until May 2022.
Accenture Solutions Pvt Limited | KA
ETL Team Lead | 10/2016 - 08/2024
IBM India Pvt. Ltd | Hyderabad, PSD
ETL Developer | 06/2014 - 08/2024
SARAL INFOTECH SYSTEM
ETL Developer | 03/2010 - 08/2024
Major Assignment #1
Project Name: Data Controls Automation | Client: HSBC
Role: Technical Lead | Duration: 01/2022 - Present
The key objective of this engagement is to automate and develop self-service tools for future DMOV controls implementation and to support ITSOs in accelerating the DMOV 1.0 and DMOV 2.01 controls implementation for the list of US applications, making them US OCC compliant within the specified timelines in 2023.
Responsibilities:
· Interacted with ITSOs and ITSO delegates to obtain complete data flow details for each interface across various data transfers.
· Collected Accuracy and Timeliness evidence from sending and receiving ITSOs for different interfaces across data transfer methods such as File (SFTP, Connect Direct), API, Database, and Message Queue.
· Created evidence packs for interfaces of the various data transfer methods: File (SFTP, Connect Direct), API, Database, and Message Queue.
· Used the SST Application automation tool to create evidence packs for the different data transfer methods.
· Reviewed evidence packs created by team members.
· Gathered required data transfer evidence through the Juniper tool.
Environment: Python, Confluence, Jira, ODS, ESP Portal, CMS Portal, UNIX, Oracle, DataStage 11.7
Major Assignment #2
Project Name: Coordinating Data Sourcing | Client: HSBC
Role: Technical Lead | Duration: May 2022 - Dec 2022
The Coordinating Data Sourcing team is responsible for ingesting data from different source systems into IHUB (Google BigQuery). Ingested Blackrock source JSON files into BigQuery using the Juniper pipeline. Ingested WGHSS Domestic and Expat AS400 DB2 data into BigQuery using the Juniper pipeline. Ingested SmartView RM360 CSV file data into BigQuery using the Juniper pipeline.
Responsibilities:
· Interacted with business users and Technical Architects to analyze the data and gather requirements from various sources.
· Created System Requirement Documents.
· Involved in Juniper pipeline creation to ingest Blackrock and WGHSS Domestic and Expat data into IHub.
· Worked on Python automation scripts to generate PSV, config, and BQ load scripts.
· Raised platform requests for service account and dataset creation.
· Involved in preparation of the raw sheet, Data Dictionary, and Compatible sheet.
Environment: Juniper, Google Cloud Platform, Python, Linux, SharePoint, Jira
Major Assignment #3
Client: DHL | Role: Senior Consultant | Duration: 01/2021 - 08/2024
The Omni Channel program aims to establish a person identity that can be used to uniquely identify individuals during interactions within the DHL Express framework. In Phase 1, for the pilot run, the email address flows to ANIDB as CSV; ANIDB flat files serve as the source, and the QualityStage process performs the deduplication and profiling agreed with the business. The scope of the ETL layer is to de-duplicate the data from source applications (e.g. ANIDB) and then send the file to consumer applications, i.e. Customer Service Management.
Responsibilities:
· Responsible for understanding the business requirements and designing and building applications accordingly.
· Worked with SMEs to gather all requirements and collect all information required for development.
· Worked with the Investigate stage to identify potential anomalies in the source system and used the Standardize stage to standardize name and address data.
· Worked with the Match Designer tool to create Match Passes, which are required by the Match Frequency stage and the One-source and Two-source Match stages.
· Worked with the One-source Match stage to identify duplicate records from the source.
· Involved in the preparation of the System Requirement Specification document.
· Provided technical assistance to the team whenever support was needed.
Environment: DataStage & QualityStage 11.7, Information Analyzer 11.7, UNIX
Major Assignment #4
Project Name: Data Load Match Application (DLMA) | Client: Commonwealth Bank of Australia
Role: Team Lead | Duration: Jun 2019 - Sept 2020
The Data Load and Match Application applies standardization, validation, and parsing rules and reports validation/exception errors for product systems. It provides the ability to identify and deliver updates to SAP for new customers, changes to existing customers, address details, new accounts, changes to existing accounts, and parties linked to new accounts and changes.
Responsibilities:
· Responsible for understanding the business requirements and designing and building applications accordingly.
· Worked with SMEs to gather all requirements and collect all information required for development.
· Designed, developed, and unit-tested complex ETL jobs; performed performance tuning of ETL jobs.
· Analyzed the quality of jobs developed by team members, provided suggestions to improve performance, and carried out performance tuning.
· Used DataStage sequencer jobs extensively to manage inter-dependencies and to run DataStage server/parallel jobs in order.
· Worked with peers to follow the business software process.
· Coordinated with team members and administered all onsite and offshore work packages.
· Designed job batches and job sequences for scheduling parallel jobs using UNIX scripts, Autosys JILs, and DataStage Director.
· Wrote extensive UNIX scripts for running DataStage jobs.
· Developed DataStage job sequences using User Variables Activity, Job Activity, Wait For File, Execute Command, Loop Activity, and Terminator stages.
· Used DataStage Parallel Extender stages such as Data Set, Sort, Lookup, Peek, Standardize, Row Generator, Remove Duplicates, Filter, External Filter, Aggregator, Funnel, Modify, and Column Export in the ETL coding.
· Wrote release notes and deployment documents and scheduled the jobs via Autosys.
· Supported the production support team during code deployment.
Environment: DataStage & QualityStage 9.1, Information Analyzer 9.1, Autosys, SQL, UNIX, Oracle 10g, SAP UI
Major Assignment #5
Project Name: Nordea Collateral Solution | Duration: 10/2016 - 08/2024
Role: Team Lead
Data is extracted from the current legacy systems in the four countries (Norway, Finland, Denmark, Sweden) and then, via an Extract, Transform and Load setup, consolidated into a shared COLLATE database, where it is separated into the different entities required by the functional requirements; the separation is expected to be on LEGAL entities to match the COLLATE structure. The migration setup architecture is designed to ensure integrity and validity when producing the dataset for the target system load, where the target is the migration tables of the COLLATE DB model. Validation is done within the cleansing phase, in the migration phase, and finally in the cut-over phases, to ensure that the data meets the requirements and expectations of each phase.
Responsibilities:
· Involved in understanding the business process and coordinated with Business Analysts to get specific user requirements.
· Extracted data from sources such as Oracle and flat files.
· Developed end-to-end jobs from Legacy to Staging, Staging to Work Staging, Work Staging to Work Target, Work Target to Target, and Target to MIG.
· Developed DataStage parallel jobs based on the mapping document.
· Developed UNIX shell scripts (DCM MIG Utility script and Live Utility script) that load the data from source through to MIG.
· Performed unit testing of the developed jobs to ensure they meet the requirements.
· Designed and developed parallel jobs using DataStage Designer.
· Used different types of stages such as Transformer, Aggregator, Merge, Join, Lookup, Sort, Remove Duplicates, Funnel, Filter, Pivot, and Shared Containers in developing jobs.
· Used diverse partitioning methods such as Auto, Hash, Same, Entire, etc.
· Involved in performing data profiling using IBM InfoSphere Information Analyzer.
Environment: DataStage & QualityStage 11.3, Information Analyzer 11.3, SQL Developer, UNIX, Oracle 11i
Major Assignment #6
Project Name: BPaid Migration | Client: Barclaycard, UK
Role: ETL Developer | Duration: Feb 2014 - Oct 2016
The Global Payment Acceptance (GPA) business unit within Barclaycard is responsible for card acquiring services offered to its merchant customers across the UK and continental Europe, covering about 38% of the UK market. Barclaycard's current portfolio of acquired merchants is principally hosted across two processing systems, Darwin and CAMSII, with approximately 300K merchants managed on these systems and a degree of interdependence between the two. They are to be replaced by a newly established installation of BankWORKS, hosted and managed internally by Barclaycard. The key drivers for the data migration are a combination of business and operational needs, compliance needs, and data needs of the target system.
Responsibilities:
· Responsible for managing the scope, planning, tracking, and change control aspects of the project.
· Translated customer requirements into formal requirements and design documents, established specific solutions, and led the efforts, including programming and testing, culminating in client acceptance of the results.
· Responsible for business requirements gathering, discussions, and analysis, while adhering to customer policies and standards.
· Developed mapping specifications, job designs, SQL queries, and UNIX scripts.
· Conducted technical discussions for onboarding suitable ETL resources into the team.
· Involved in unit/regression/integration testing, issue analysis, and preparation of test case documents.
· Developed queries to build the merchant hierarchy.
· Coordinated with the team and supported them on business understanding and implementation.
· Supported different teams in the account with ETL issues, solution implementation designs, and job development.
· Responsible for knowledge transfer to team members on functional knowledge of the BOC and BankWORKS modules.
Environment: Oracle 11g, Informatica 9.1.1, UNIX, Trillium Control Centre 14, HP Quality Center 10
Major Assignment #7
Project Name: Blue Harmony | Client: IBM
Role: Data Profiling Analyst | Duration: Mar 2010 - Dec 2013
International Business Machines (IBM) is implementing SAP R/3 ERP Core Component (ECC) 6.0 for Finance (FI), Controlling (CO), Materials Management (MM), Project System (PS), Workflow (WF), and Sales and Distribution (SD). The Blue Harmony Global SAP Implementation Project is an IBM initiative to replace legacy ERP applications across the entire company with one common SAP ERP platform. The key objective is for the single SAP platform to utilize common global business processes and best practices, allowing IBM to achieve business benefits throughout the globe. To achieve this objective, IBM is conducting the project in two waves that take advantage of core competencies and focus on businesses and business applications that can provide a significant return on IBM's investment. The Data Integration team is one of the sub-teams in Blue Harmony; it develops objects using the DataStage tool with the SAP Pack, receives data from legacy systems, and loads the transformed data into the SAP application in the form of IDocs, BAPIs, ABAP, etc.
Responsibilities:
· Understood the business specification documents and customer requirements.
· Involved in extracting, transforming, and loading data from sources such as flat files and relational databases into the respective targets.
· Used stages such as Sequential File, Aggregator, Sort, Join, Lookup, Funnel, Copy, Filter, Transformer, and DB2 Connector.
· Involved in the creation of source-to-staging, staging-to-alignment, and alignment-to-preload jobs.
· Analyzed the customer source data and designed QualityStage jobs for Standardization, Matching, and Survivorship to achieve a single view of the customer based on the business rules.
· Effectively used the Standardize stage to standardize the source data using existing rule sets such as Name and Address.
· Effectively used the Data Rules stage to implement business rules that validate source data.
· Involved in creating DataStage sequence jobs for running IA data rules through DataStage automation jobs.
· Built several sequencer jobs using stages such as Job Activity, Notification Activity, Routine Activity, Execute Command, User Variable Activity, Sequencer, Wait For File Activity, Terminator Activity, Start Loop Activity, and End Loop Activity.
Environment: IBM DataStage 8.7 with SAP Packs, Information Analyzer 8.7, QualityStage 8.7; Database: DB2; File System: AIX; ClearQuest
Skills
ETL Tools: DataStage 8.2/8.7/9.1/11.3, 11.7, Informatica PowerCenter 9.1
Languages: SQL, PL/SQL, C, C++
Database/Applications: Oracle 8i/9i/10g, DB2, Snowflake, Snow SQL
Operating Systems: MS DOS, Windows NT/2000, Sun Solaris 4.0
Packages: MS Suite
GUI Tools: Toad 7.6/9.5, Oracle SQL Developer
Profiling Tools: Trillium Control Centre, Information Analyzer 8.2/8.7/9.1/11.3, 11.7
Other Tools: Google Cloud Platform, Octopus, GitHub
Education
JNTU | Hyderabad
M.C.A | 01/2009