Nandigama Ramprasad
Cloud Data Architect | +* (***) *** - **** | **********@*****.***
• A versatile IT professional with nearly 15 years of experience, primarily in Snowflake development, Teradata development, ETL development, data engineering, and analysis and testing of data warehouse applications across domains.
• 1.9 years of onsite experience in the USA, working directly with clients and stakeholders on requirement gathering and design discussions.
• Working experience across banking, retail, telecom, and healthcare domains. Hands-on experience designing and implementing Snowflake data warehouses and data lakes. Experience with Snowflake functions, utilities, stages, file formats, zero-copy cloning, Time Travel, Fail-safe, tasks, streams, Snowpipe, SnowSQL, Snowpark, and stored procedures written in Snowflake SQL.
• Expertise in Snowflake concepts such as RBAC controls, virtual warehouses, query performance tuning, and resource monitors, with a solid understanding of micro-partitions, clustering, multi-cluster warehouse sizing, and credit usage.
• Experience in bulk loading from external stages (AWS S3, Azure Blob Storage) and internal stages into Snowflake tables using the COPY command. Experience in Snowflake cost optimization through optimal warehouse size selection, auto-suspending idle warehouses, and adjusting the default query timeout value (see the SQL sketch at the end of this summary).
• Experience handling semi-structured data (JSON, XML) and columnar PARQUET using the VARIANT data type in Snowflake. Working experience with AWS S3, Lambda, EC2, Glue, and other AWS services.
• Experience in data modeling and source-to-target mapping. Data engineering expertise in designing and developing scalable data pipelines and infrastructure. Experience migrating on-premises Teradata database objects and data to Snowflake, and implementing ETL pipelines within and outside the data warehouse using Python and Snowflake's SnowSQL.
• Experience using Python methods, functions, packages, and libraries such as Pandas, boto3, and NumPy.
• Designed and implemented source-to-target mappings using Informatica IICS, ensuring accurate and efficient data movement. Implemented SCD logic and incremental load strategies in IICS.
• Performed loads into the Snowflake instance using the Snowflake connector and created mappings, mapping tasks, and taskflows in CDI as per requirements.
• Extensively worked on various transformations like Lookup, Joiner, Router, Rank, Sorter, Aggregator, Expression, etc.
• Expert in using the Data Mover tool and framework for migrating data across different Teradata environments.
• Expert in using Teradata Viewpoint for query monitoring, system health checks, workload management, and user management. Data modeling, data profiling, and data analysis experience using dimensional data modeling and Data Vault modeling. Hands-on experience with Erwin in both forward and reverse engineering cases.
• Implemented a CI/CD pipeline using Jenkins and Docker. Experience using DBT (data build tool) to execute ELT pipelines in Snowflake.
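A minimal Snowflake SQL sketch of the patterns summarized above (external-stage bulk loads with COPY, VARIANT handling for semi-structured data, and warehouse cost controls). All object names (my_s3_stage, landing.orders_raw, etl_wh, the s3_int storage integration) are illustrative placeholders, not objects from the projects described in this resume.

```sql
-- Illustrative objects only; assumes a storage integration named s3_int already exists.
CREATE OR REPLACE FILE FORMAT ff_json TYPE = JSON STRIP_OUTER_ARRAY = TRUE;

CREATE OR REPLACE STAGE my_s3_stage
  URL = 's3://example-bucket/orders/'
  STORAGE_INTEGRATION = s3_int
  FILE_FORMAT = ff_json;

-- Bulk load semi-structured files into a VARIANT column.
CREATE OR REPLACE TABLE landing.orders_raw (payload VARIANT);

COPY INTO landing.orders_raw
  FROM @my_s3_stage
  ON_ERROR = 'CONTINUE';

-- Query JSON attributes and flatten a nested array.
SELECT payload:order_id::STRING      AS order_id,
       payload:customer.name::STRING AS customer_name,
       item.value:sku::STRING        AS sku
FROM   landing.orders_raw,
       LATERAL FLATTEN(INPUT => payload:line_items) item;

-- Cost controls: right-size the warehouse, auto-suspend idle compute, cap runaway queries.
ALTER WAREHOUSE etl_wh SET
  WAREHOUSE_SIZE = 'SMALL',
  AUTO_SUSPEND = 60,                     -- seconds
  STATEMENT_TIMEOUT_IN_SECONDS = 3600;
```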
T E C H N I C A L S K I L L S
Databases: Snowflake, AWS Redshift, Azure Cosmos DB, Teradata 12-16 (Database & UDA), Oracle 9i, SQL Server 2000/2005
Cloud Platforms: AWS (RDS, S3, Glue, EMR, Datadog, CloudWatch, Lambda, IAM, SNS, SQS, AWS Connect, Step Functions, S3 Archive)
Big Data Ecosystem: Spark, PySpark, Databricks, Hive, HDFS
Open Source: SQL Workbench, Oracle SQL Developer
Languages: SQL, PL/SQL, SnowSQL, Python
Software/Tools: Snowpark, Teradata SQL Assistant, Tivoli, Viewpoint (Teradata 14), DBT
ETL/Data Pipeline: Talend, Teradata Utilities, IIDR, ADF (Azure)
Data Visualization & Reporting: Tableau, Power BI
Data Modeling: Erwin, MIS Studio, draw.io
Configuration Tools: VSS, Tortoise SVN, StarTeam
CDC Tools: IIDR, Qlik Replicate
Others: Mainframe, Autosys, Informatica, DataStage, ServiceNow, Unix, Linux

P R O F E S S I O N A L E X P E R I E N C E
• Elevance Health - Anthem (USA), Snowflake Architect, Aug 2023 to date
• Bank of America, Cloud Data Architect, Jan 2016 to May 2023
• Teradata Corporation (Pune), Technical Consultant, Aug 2013 to Jan 2016
• Syntel (Mumbai), Senior Software Engineer (SSE), Apr 2012 to Aug 2013
• Wipro Technologies (Bengaluru), Software Engineer, Dec 2011 to Mar 2012
• E business ware (Gurgaon), Application Lead, Sept 2011 to Dec 2011
• Mahindra Satyam (Tech Mahindra) (Hyderabad), Software Engineer, Aug 2010 to Sept 2011

K E Y P R O J E C T S
Elevance Health (Anthem) (Jun 2023 to date)
Client: Elevance Health (Anthem)
Role: Snowflake Architect
Key Result Areas:
Discovery assessment of the on-prem Oracle data warehouse system for migration to the Snowflake (AWS) platform
Platform and server sizing, and cost estimation (storage and compute) to run the workload on the Snowflake platform
Performance matrix evaluation of Snowpark (SQL API) vs. Snowflake SQL Scripting stored procedures (see the sketch below)
Oracle to Snowpark SQL API (stored procedure) code conversion and code optimization using DataFrames
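For context on the comparison and conversion items above, a minimal sketch of a Snowflake SQL Scripting stored procedure of the style the Snowpark (SQL API) version was evaluated against; the procedure, schema, and column names are hypothetical.

```sql
-- Hypothetical objects: conform.load_daily_claims, raw.claims, conform.claims_daily.
CREATE OR REPLACE PROCEDURE conform.load_daily_claims(p_load_date DATE)
RETURNS STRING
LANGUAGE SQL
AS
$$
DECLARE
  v_rows NUMBER DEFAULT 0;
BEGIN
  -- Move one day of data from the raw layer into the conformed layer.
  INSERT INTO conform.claims_daily (claim_id, member_id, amount, load_date)
    SELECT claim_id, member_id, amount, :p_load_date
    FROM   raw.claims
    WHERE  event_date = :p_load_date;

  v_rows := SQLROWCOUNT;   -- rows affected by the last DML statement
  RETURN 'Loaded ' || v_rows || ' rows';
END;
$$;

CALL conform.load_daily_claims(TO_DATE('2024-01-31'));
```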
Parallel run and CDC strategy design; data model recommendations for aggregate tables
Collibra data product creation and integration with Snowflake
Data pipeline design and orchestration using Airflow for transaction-consistency workloads
Python-based framework solution design; technology assessment for history and CDC (ADF & IIDR) pipelines based on existing and new offerings
Design and implementation of Medallion Architecture for real-time and ad-hoc BI reporting; MVP for one subject area covering 5 TB of data migration
Near real time data integration using Snowpipe
Transaction-consistency solution design for real-time data; data movement from the raw layer to the conformed layer and finally the model layer using stream, task, and Snowpark/stored-procedure capabilities (see the sketch below)
Data aged beyond 5 years is modeled per consumption patterns and maintained in the data lake under the conformed layer
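A hedged sketch of the near-real-time movement described above: Snowpipe auto-ingests files into a raw table, a stream captures the changes, and a scheduled task merges them into the conformed layer. Object names, and the assumption that the raw table holds a single VARIANT payload column, are illustrative only.

```sql
-- Near real time ingestion: Snowpipe loads files as they land in the external stage
-- (assumes S3 event notifications are configured for AUTO_INGEST).
CREATE OR REPLACE PIPE raw.events_pipe AUTO_INGEST = TRUE AS
  COPY INTO raw.events
  FROM @raw.events_stage
  FILE_FORMAT = (TYPE = JSON);

-- Capture changes on the raw table.
CREATE OR REPLACE STREAM raw.events_stream ON TABLE raw.events;

-- A task periodically merges pending stream rows into the conformed layer.
CREATE OR REPLACE TASK conform.merge_events
  WAREHOUSE = etl_wh
  SCHEDULE  = '5 MINUTE'
  WHEN SYSTEM$STREAM_HAS_DATA('raw.events_stream')
AS
  MERGE INTO conform.events t
  USING raw.events_stream s
    ON t.event_id = s.payload:event_id::STRING
  WHEN MATCHED THEN UPDATE SET t.payload = s.payload
  WHEN NOT MATCHED THEN INSERT (event_id, payload)
       VALUES (s.payload:event_id::STRING, s.payload);

ALTER TASK conform.merge_events RESUME;
```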
Archival and purge strategy; stream and dynamic query usage in some use cases for hot and cold data access and refresh
Platform and server sizing recommendations and cost matrix building per the projected workload
Snowflake platform cost (storage and compute) calculation using metadata queries (see the sketch below)
Data mesh offering solution design using various technology stacks
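The cost calculation above can be sketched with queries against the SNOWFLAKE.ACCOUNT_USAGE share; a simplified example covering monthly compute credits per warehouse and average monthly storage follows.

```sql
-- Monthly compute credits per warehouse.
SELECT warehouse_name,
       DATE_TRUNC('month', start_time) AS usage_month,
       SUM(credits_used)               AS credits
FROM   snowflake.account_usage.warehouse_metering_history
GROUP BY 1, 2
ORDER BY usage_month, credits DESC;

-- Average monthly storage (database + stage + Fail-safe) in terabytes.
SELECT DATE_TRUNC('month', usage_date) AS usage_month,
       AVG(storage_bytes + stage_bytes + failsafe_bytes) / POWER(1024, 4) AS avg_tb
FROM   snowflake.account_usage.storage_usage
GROUP BY 1
ORDER BY 1;
```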
Partnership management and negotiation with the Teradata team to build capability on Teradata Vantage for migration and internal tools offerings under the migration umbrella
Patent submission for a cloud data warehouse to Teradata Vantage migration framework
Creation of a Discovery, Migration Recommendation, and Teradata Object Complexity (stored procedures/views/BTEQ/macros/UDFs, etc.) Analyzer
Snowflake competency and CoE building in the modern data warehouse area; managed the Snowflake and Teradata partnerships and provided a growth plan from the CoE side
Nurtured and trained 20+ resources on Snowflake; trained 30+ resources on Teradata and ran a certification drive that produced 20+ certified professionals
Integration of Teradata-to-Teradata Vantage migration and validation capability into the in-house tool (SDTT)
Supported presales/sales teams during RFPs, including solution architecture, estimates, and resource loading plans
Solutions for business cases such as Teradata migration/offloading to cloud or low-cost environments, or hybrid architecture; POCs for workload management and simulation on the Teradata and Snowflake platforms
On-demand support to delivery teams for discovery and migration
Creating PoVs for the marketing and branding teams on Teradata and Snowflake assets covering migration, ecosystem, comparative studies, etc.
Bank of America (Full-Time Employee), Jan 2016 to May 2023: On-prem to cloud migration assets (W-AMS) (Mainframe, Teradata, Big Data)
Role: Senior Analyst / Mainframe Big Data Production Support
Responsibilities:
Created tables and views on Snowflake as per business needs. Used the Tivoli scheduler to manage and automate the execution of tasks, jobs, and processes across the IT infrastructure, orchestrating complex workflows, managing dependencies between jobs, and optimizing resource utilization.
Used Korn shell scripting for both day-to-day tasks and more complex automation scenarios on Unix.
Used IICS integrations to connect applications, data sources, and services across on-premises and cloud environments. Attended daily calls with the onsite coordinator and updated minutes of meeting (MOMs) in the project folders. Worked on IMS front-end screens for inquiry as part of testing the online programs.
Implemented warehouse resizing and various performance tuning features to improve the performance of large, complex queries against large data sets, complex joins, and loading/unloading of significant data volumes.
Responsible for designing and implementing robust security policies to protect sensitive data from unauthorized access, including defining user roles, access controls, and data classification policies (see the sketch below).
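A minimal sketch of the role-based access and data-protection setup described above, using standard Snowflake RBAC grants and a masking policy; the role, warehouse, schema, and column names are illustrative.

```sql
-- Illustrative names only: analyst_ro, reporting_wh, edw.semantic, members.ssn.
CREATE ROLE IF NOT EXISTS analyst_ro;
GRANT USAGE  ON WAREHOUSE reporting_wh               TO ROLE analyst_ro;
GRANT USAGE  ON DATABASE  edw                        TO ROLE analyst_ro;
GRANT USAGE  ON SCHEMA    edw.semantic               TO ROLE analyst_ro;
GRANT SELECT ON ALL TABLES    IN SCHEMA edw.semantic TO ROLE analyst_ro;
GRANT SELECT ON FUTURE TABLES IN SCHEMA edw.semantic TO ROLE analyst_ro;
GRANT ROLE analyst_ro TO ROLE sysadmin;   -- keep the custom role in the role hierarchy
GRANT ROLE analyst_ro TO USER report_user;

-- Column-level protection for classified data via a masking policy.
CREATE MASKING POLICY IF NOT EXISTS mask_ssn AS (val STRING) RETURNS STRING ->
  CASE WHEN CURRENT_ROLE() IN ('PHI_FULL') THEN val ELSE 'XXX-XX-XXXX' END;

ALTER TABLE edw.semantic.members MODIFY COLUMN ssn SET MASKING POLICY mask_ssn;
```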
Created RDS read replicas for database read throughput and used Multi-AZ deployment for high availability and resiliency. Used Amazon KMS for storing and managing keys for the applications. Databricks provides a fully managed Spark service for running Spark applications and distributed data processing at scale, supporting languages such as Python, SQL, and R.
Led technical discussions with senior customer executives to drive decisions and implementation approaches.
Designed Microsoft Azure architectures that meet customers' technical, security, and business needs for apps/workloads; implemented architecture solutions using MS Azure PaaS/IaaS/SaaS services.
Developed Ansible automation for CI/CD pipelines and infrastructure-as-code deployments, including high availability and disaster recovery implementations.
Leveraged Databricks and Data Factory to replace the ETL process in Azure Synapse.
'Data in a box' offering for sales using Oracle, Starburst, and Collibra. Solution design for a metadata-capturing framework covering raw products such as Oracle, DB2, and others per the client environment. Data capture into the Snowflake environment for fraud and customer churn prediction models.
Data lake implementation based on the medallion architecture in the Snowflake environment for raw, core, and transaction data products.
Technical and business metadata mapping in the metadata framework using Python, Neo4j, and Collibra; raw-product to core-product build-out using the Neo4j and Python framework.
Data virtualization using Starburst; data collected from various sources and placed in Parquet format in the data lake under raw, cleansed, and curated layers, then married with business metadata before pushing to the target and the Collibra repository.
Data movement into the Snowflake data lake and warehouse on medallion architecture principles, for raw, core, and transaction products.
Discovery lead for the Oracle system to Snowflake migration; migration and history data load strategy and system design; implementation and enhancement of the existing discovery tool to provide insight into the system.
Automated schema and data extraction from the Oracle system and shipping into S3. Metadata-driven ingestion framework design and pipeline build to load data from S3 into the Snowflake stage; target-system loading into Snowflake using stored procedures and SnowSQL (see the sketch below).
The Snowflake environment is composed of stage, RAW (pre-governed), governed, integrated, and semantic data layers to move and align data from RAW format to a usable format within functional data marts. CDC pipeline implementation using IICS jobs and workflow orchestrations.
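A hedged sketch of one way the metadata-driven ingestion step could look in Snowflake SQL: a control table lists each feed's stage path and RAW target, and a Snowflake Scripting procedure loops over it issuing COPY statements. The control table and all object names are hypothetical, not the project's actual framework.

```sql
-- Hypothetical control table; stage paths and target tables are placeholders.
CREATE TABLE IF NOT EXISTS ctrl.ingest_feeds (
  feed_name  STRING,
  stage_path STRING,   -- e.g. '@raw.s3_stage/claims/'
  target_tbl STRING    -- e.g. 'raw.claims'
);

CREATE OR REPLACE PROCEDURE ctrl.run_ingestion()
RETURNS STRING
LANGUAGE SQL
AS
$$
DECLARE
  c CURSOR FOR SELECT stage_path, target_tbl FROM ctrl.ingest_feeds;
  v_sql STRING;
BEGIN
  -- One COPY per registered feed, driven entirely by the control table.
  FOR rec IN c DO
    v_sql := 'COPY INTO ' || rec.target_tbl ||
             ' FROM '     || rec.stage_path ||
             ' ON_ERROR = ''CONTINUE''';
    EXECUTE IMMEDIATE :v_sql;
  END FOR;
  RETURN 'Ingestion complete';
END;
$$;
```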
Automated data reconciliation using a smart data validator (Python-based tool): field validation, null value validation, numeric value validation, average value validation, and chunked MD5 hash value validations (see the sketch below). Data transformation from the raw to the conformed layer using DBT.
Offshore discovery lead for the 1.6 PB Teradata system to Snowflake migration; implementation and enhancement of the existing discovery tool to provide insight into the system.
Detailed analysis of the system across dimensions such as number of databases, database sizes and counts, data object counts, and number of ETL jobs. Creation of ecosystem tools and a consumption and feeding patterns matrix.
Creation of a complexity matrix based on time-to-move and job complexity to help understand the weightage of migration considerations.
Creation of a system growth matrix to understand volumetric and compute-based patterns for future system sizing to accommodate existing and new workloads on the cloud.
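A simplified SQL sketch of the reconciliation checks listed above: row counts, null checks, and an aggregate content hash (shown here with Snowflake's HASH_AGG rather than the validator's chunked MD5). Table and column names are placeholders.

```sql
-- Placeholders: raw.claims vs. conform.claims with keys claim_id, member_id, amount.
SELECT 'row_count' AS check_name,
       (SELECT COUNT(*) FROM raw.claims)     AS source_value,
       (SELECT COUNT(*) FROM conform.claims) AS target_value
UNION ALL
SELECT 'null_member_id',
       (SELECT COUNT(*) FROM raw.claims     WHERE member_id IS NULL),
       (SELECT COUNT(*) FROM conform.claims WHERE member_id IS NULL)
UNION ALL
SELECT 'content_hash',
       (SELECT HASH_AGG(claim_id, member_id, amount) FROM raw.claims),
       (SELECT HASH_AGG(claim_id, member_id, amount) FROM conform.claims);
```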
Databricks supports the development and deployment of machine learning models using popular frameworks such as TensorFlow, PyTorch, and scikit-learn, and offers tools for feature engineering, model training, and model evaluation. Databricks integrates with various data sources and provides connectors to ingest and process data from databases, data lakes, and streaming sources, and it also supports Extract, Transform, Load (ETL) operations for data preparation.
Designed and deployed enterprise-scale, complex cloud infrastructure solutions. Implemented microservices architecture on Azure Kubernetes Service (AKS). Built ETL pipelines in Data Factory to pull data from sources such as FTP/SFTP folders into data lake storage. Architecture and hands-on implementation experience with medium to complex on-prem to Azure migrations.
Created multiple notebooks in Databricks (using PySpark and SQL) to consolidate numerous ETL processes.
Created tables and views in Teradata as per business needs. Application monitoring and quick resolution of job failures to meet SLAs and ensure data availability to users.
Strong knowledge of Data Warehousing concepts and Dimensional modeling like Star Schema and Snowflake Schema.
Extensively used ETL methodologies supporting data extraction, transformation, and loading in a corporate-wide ETL solution using Ab Initio/Informatica PowerCenter.
Highly experienced in the Ab Initio ETL tool using the GDE designer, with very good working experience across the Ab Initio components.
Expertise and well versed with various Ab Initio Transform, Partition, Departition, Dataset and Database components.
Experience with the Ab Initio Co-Operating System, application tuning, and debugging strategies. This project involved platform migration from a DB2 system to a new Teradata box, focusing mostly on converting SQL, stored procedures, macros, functions, and table and view DDLs from DB2 to Teradata compatibility. Expert in writing UNIX shell scripts, including Korn shell and Bourne shell scripts.
Object migration from DEV to PROD was deployed using the BAR process; designed the scripts to migrate the DB2 data needed for testing to Teradata.
2+ years of hands-on experience in ETL development and data testing on the Ab Initio tool.
Handled data round-off and compatibility issues during the migration process. Defined and implemented data integration strategies for loading and extracting data between DB2 and Teradata.
Ensuring the stability of Bank of America's various Teradata production platforms.
Resolved tickets related to data and questions related to W tables. Created warehouses, databases, schemas, objects, and user roles per the migration design and worked with ETL (IICS) to migrate data flows end to end. Analyzed code to resolve issues from clients and users. Implemented enhancements using Endeavor.
Identified poorly performing queries and suggested them for tuning. Performed data reloads.
Handled unplanned outages. Apart from Mainframe, a few of the applications load data to the target platform via Hadoop; monitored the jobs via Autosys and checked for failures. For environment-related failures, engaged the Hadoop admin team to fix the error; for repetitive and known issues, applied the documented fix procedure; for any other failures, engaged the L3 team.
Application Development:
• Application development for the applications supported in the latest project: understanding requirements from the clients and working toward prompt delivery of the code.
• Building the requirement in the development environment, testing it in the UAT environment, and pushing the code to production. Providing support for the new changes through the warranty period and turning them over to the production support team.
Accomplishments
Led the Data Mover job catch-up process during several planned Teradata software/hardware upgrades.
Worked on one of the most complex and longest data reloads in the bank's history for one of the major applications.
Analyzed and captured high-CPU-consuming jobs and suggested them for the query tuning process, which eventually saved millions of CPU cycles.
Served as the POC for employee engagement activities, organizing many cultural and fun events within our LOB.

Teradata India Pvt. Ltd. (Aug 2013 to Jan 2016)
Client: ANZ (Australia and New Zealand Banking Group)
Role: Teradata Consultant (Developer)
Key Result Areas:
o Requirement gathering from platform SMEs; system analysis and preparation of design documents.
o Data migration and loading in the ARC reporting layer.
o Preparing estimates for the study and DBTR phases and detail-level designs.
o Technical support to the development and testing teams.

Syntel Limited (Apr 2012 to Aug 2013)
Client: AMEX (American Express)
Role: Teradata Developer
Key Result Areas:
Import/export of data using Teradata utilities such as FastLoad, FastExport, and BTEQ scripts.
Creation/modification of new/existing database objects and their corresponding loading scripts as per requirements.
Involved in preparing unit tests and analyzing test results. Provided solutions for incidents raised in production environments.
Responsible for creating test specifications as part of system testing. Coordinated with onsite teams during offshore development; validated target data against source data. Initiated weekly project status meetings.
Involved in performance tuning and peer-review activities.
Handled management activities such as delivering weekly and monthly status reports and monitoring project flow and planning.
Wipro Limited as Software Engineer (Dec 2011 – Mar 2012)
Preparation of KT documents for internal offshore members.
Creation of databases and database objects (tables, views, procedures, functions, triggers) and DBA tasks in MS SQL Server 2000/2005.
Designing, executing, and scheduling DTS packages.
Prepared POC on SSIS, SSAS and SSRS tools as part of project work.
Creation of repository for OLAP database using MIS studio modeling tool.
Designing attribute tables, subsets, dimensions, cubes, and data areas for cubes.
Creation of MCF for installation based on development, and preparation of solution concepts based on requirements. Designing MIS Studio scripts for import and export of data between the OLAP database and the SQL database.
Design and implementation of an Access database, linking of SQL Server database tables, and form design and coding using macros and VBA. Defined test specifications and executed test cases in Mercury Quality Center 8.2.
Import/export of data using Teradata utilities such as FastLoad, FastExport, TPump, and BTEQ scripts.

Tech Mahindra as Software Engineer (Aug 2010 to Sept 2011)
Client: AIG (American International Group)
Key Result Areas:
Responsible for resolving issues from EIW customer support on a 24x7 on-call basis.
Maintenance and monitoring of the Viewpoint server and PDCR tables; user administration; generated IDR usage reports; DDL promotions to the production box; resolved Remedy trouble tickets; produced daily and weekly Data Lab capacity reports and monthly CPU and space utilization reports. Performed application-level DBA activities such as creating tables and indexes, and monitored and tuned Teradata BTEQ scripts using the Teradata Visual Explain utility.
Capacity planning and proactive monitoring to meet performance and growth requirements
Provided user admin support to various CMS vendors: Lockheed Martin, Northrop Grumman, HP, Thomson Reuters, Teradata, etc. Tuning, system performance management, and design, creation, and maintenance of data marts and Teradata objects; space and access management; DR maintenance. Recommended statistics on base tables for future usage. Generated weekly CPU and disk I/O usage capacity reports through DBC queries (see the sketch below).
Performance-tuned and optimized SQL queries.
Produced daily and hourly CPU utilization, CPU by source, CPU ETL, spool usage, and database space reports using the PDCR toolkit. Recommended PPI and MLPPI on various tables to tune queries.
Environment: Teradata V2R12 (V2R14), Viewpoint, Teradata Administrator, Informatica, UNIX
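A hedged sketch of the kind of DBC-based capacity query behind the weekly space reports, against Teradata's DBC.DiskSpaceV view; the formatting and thresholds would vary by site.

```sql
-- Database space utilization by database, in GB and percent used.
SELECT   DatabaseName,
         CAST(SUM(CurrentPerm) / (1024*1024*1024) AS DECIMAL(18,2)) AS used_gb,
         CAST(SUM(MaxPerm)     / (1024*1024*1024) AS DECIMAL(18,2)) AS max_gb,
         CAST(SUM(CurrentPerm) * 100.0 / NULLIF(SUM(MaxPerm), 0) AS DECIMAL(5,2)) AS pct_used
FROM     DBC.DiskSpaceV
GROUP BY DatabaseName
ORDER BY pct_used DESC;
```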
Personal Details
Name: Ramprasad Nandigama
Designation: Cloud Data Architect (Snowflake/Teradata/Informatica)
Passport: Z3594630
US H1B Work Authorization: Until 08/2026
Date of Birth: 02-25-1984
Current Location: Atlanta, GA (Client Location)
EDUCATION
Master of Technology (M.Tech) in Computer Science, SMCET, JNTU, Hyderabad, India (Sept 2009 to May 2012)
Bachelor of Technology (B.Tech), SRTIST, JNTU, Nalgonda, India (Sep 2002 to May 2006)
CERTIFICATIONS
Snowflake SnowPro Certified
AWS Solutions Architect
Azure Fundamentals
Azure Developer
ITIL
Tableau Desktop Specialist
Teradata Vantage Developer
Teradata Vantage Associate
Teradata 14 Certified Professional
DataStage Certified