ETL Lead

Location:
Garner, NC
Posted:
August 16, 2024

Siri Vennela Namburu

ETL Lead

Contact: +1-980-***-****, +1-919-***-****

Email: ad70u4@r.postjobfree.com

Linkedin: https://www.linkedin.com/in/sirinamburu/

PROFESSIONAL SUMMARY:

●7+ years of experience in IT, including analysis, data governance, design, and development of Big Data solutions using Hadoop; design and development of web applications using Python and Django; and database and data warehouse development using MySQL, PostgreSQL, Oracle, and Druid.

●Strong, specialized experience across the Big Data ecosystem: data acquisition, ingestion, modeling, analysis, integration, and processing.

●Experience in building data pipelines using Azure Data Factory and Azure Databricks, loading data into Azure Data Lake, Azure SQL Database, and Azure SQL Data Warehouse, and controlling and granting database access.

●Experience in developing data integration solutions in Microsoft Azure Cloud Platform using services Azure Synapse Analytics, Azure Blob Storage, Azure Data Lake Storage.

●Used Azure data platform capabilities such as Azure Data Lake, Azure Data Factory, HDInsight, Azure SQL Server, Azure Machine Learning, and Power BI to build large Lambda-architecture systems.

●Extensive experience in designing and developing Data Stage jobs to extract data from various sources, transform it according to business rules, and load it into Azure data services.

●Designed and implemented end-to-end data pipelines using Azure Synapse Analytics, ensuring efficient data ingestion, transformation, and loading processes.

●Implemented automated data pipelines using Snowflake tasks and Azure Data Factory pipelines, reducing manual intervention, and improving data delivery efficiency.

●In-depth experience and good knowledge in using Hadoop ecosystem tools like HDFS, MapReduce, YARN, Spark, Kafka, Storm, Hive, Impala, Sqoop, HBase, Flume, Oozie, NiFi, and Zookeeper.

●Good Understanding of Hadoop architecture and Hands on experience with Hadoop components such as Job Tracker, Task Tracker, Name Node, Data Node and Map Reduce concepts and HDFS Framework.

●Experienced in developing PySpark applications using Spark SQL in Databricks for data extraction, transformation, and aggregation from multiple file formats, analyzing and transforming the data to uncover insights into customer usage patterns (a brief sketch follows this summary).

●Proficient in troubleshooting and resolving issues related to Data Stage jobs, data connectivity, and data quality in an Azure environment.

●Traced and catalogued data processes, transformation logic, and manual adjustments to identify data governance issues.

●Experienced in using Cloudera Manager for installation and management of single-node and multi-node Hadoop clusters.

●Skilled in performance tuning and optimization techniques to enhance the efficiency of Data Stage jobs and improve overall data processing performance in Azure.

●Experienced in Data load management, importing & exporting data using Sqoop & Flume.

●Installed and configured Oracle databases and worked as a member in Oracle Database Administration team, which includes design of database, planning and backup/recovery strategies.

●Skilled in integrating data from databases, flat files, APIs, and cloud services, using Talend's connectors and adapters, ensuring seamless data flow and integration across different platforms.

●Expertise in building data pipelines using Apache Hop and SSIS, incorporating various data sources, transformations, and loading data into Azure data platforms.

●Experienced in analyzing data using Hive, and custom MapReduce programs in Java.

●Extensive experience in writing complex SQL queries, stored procedures, triggers, and user-defined functions in DB2, enabling efficient data retrieval, manipulation, and data integrity enforcement.

●Good experience in software development with Python (libraries used: Beautiful Soup, NumPy, SciPy, Matplotlib, python-twitter, pandas DataFrames, NetworkX, urllib2, and MySQLdb for database connectivity) and IDEs such as Spyder, PyCharm, and Jupyter.

●Hands on working skills with different file formats like Parquet, ORC, SEQ, AVRO, JSON, RC, CSV, and compression techniques like Snappy, GZip.

●Experienced in designing time-driven, test-driven, and data-driven automated workflows using Oozie.

●Extended Hive core functionality by writing custom UDFs, UDTFs, and UDAFs in Spark with Scala.

●Extensive experience in developing and optimizing data processing pipelines on mainframe systems, efficiently handling large volumes of data while ensuring data integrity and reliability.

●Hands on experienced in writing Spark jobs and Spark streaming APIs using Scala, and Python.

●Experienced in handling Spark SQL, Streaming and complex analytics using pyspark over Cloudera Hadoop YARN.

●Good at implementing Kafka custom encoders for custom input format to load data in partitions.

●Designed and developed the core data pipeline, which involved Python and Scala coding and the Kafka and Spark frameworks.

●Developed data validation and cleansing routines to maintain data integrity and accuracy within Salesforce databases.

●Proficient in debugging and troubleshooting COBOL programs, utilizing debugging tools, analyzing program dumps, and applying effective problem-solving techniques to identify and resolve issues.

●Expertise in using Kafka for log aggregation solution with low latency processing and distributed data consumption and widely used Enterprise Integration Patterns (EIPs).

●Worked on Apache Flink to implement transformations on data streams for filtering, aggregating, and updating state.

●Experience working with Amazon cloud web services like EMR, Redshift, DynamoDB, Lambda, Athena, S3, RDS, and CloudWatch for efficient processing of big data.

●Experienced with Amazon EC2 cluster instances, setting up data buckets on S3, and setting up EMR.

●Capable of using AWS utilities such as EMR, S3, and CloudWatch to run and monitor Hadoop and Spark jobs on AWS.

●Hands on experience in using other Amazon Web Services like Autoscaling, RedShift, DynamoDB, Route53.

●Extensive hands-on experience in migrating on-premises ETL to GCP using cloud-native tools such as Cloud Dataproc, BigQuery, Google Cloud Storage, and Cloud Composer.

●Experienced in providing highly available and fault tolerant applications utilizing orchestration technology on Cloud Platform.

●Experienced in ETL process, Data Modeling and Mapping, Data Integration, Business Intelligence and Data Analysis, Data Validation and Data Cleaning, Data Verification and Identifying data mismatch.

●Experienced with code versioning and dependency management systems such as Git, Jenkins, Maven.

●Writing code to create single-threaded, multi-threaded, or user-interface event-driven applications, either stand-alone or accessing servers and services.

●Good experience in using Data Modelling techniques to find the results based on SQL and PL/SQL queries.

●Experienced working with different databases, such as Oracle, SQL Server, MySQL and writing stored procedures, functions, joins, and triggers for different Data Models.

●Expertise in creating Dashboards and Alerts using Splunk Enterprise, Tableau and Monitoring using DAGs.

●Skilled in monitoring servers using CloudWatch and the ELK stack (Elasticsearch, Logstash, Kibana).

●Experienced in writing Gherkin and Selenium automation test scripts for execution in Cucumber.

●Highly motivated self-starter with good problem-solving, communication and interpersonal skills.

●Good team player, dependable resource, and ability to learn new tools and software quickly as required.

●Good domain knowledge of the Finance, Healthcare, and Ridesharing industries.
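
The Databricks/PySpark bullet above can be illustrated with a minimal sketch. The file paths, schema, and column names below are assumptions for illustration only, not actual project code; the pattern is simply reading several file formats, conforming them to a common schema, and aggregating to surface usage patterns.

from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("usage-aggregation").getOrCreate()

# Read the same logical feed from several formats (paths are placeholders).
parquet_df = spark.read.parquet("/mnt/raw/usage/parquet/")
json_df = spark.read.json("/mnt/raw/usage/json/")
csv_df = spark.read.option("header", "true").csv("/mnt/raw/usage/csv/")

# Conform to a common schema before combining (column names are hypothetical).
cols = [F.col("customer_id").cast("string"),
        F.col("event_ts").cast("timestamp"),
        F.col("feature").cast("string")]
usage = (parquet_df.select(*cols)
         .unionByName(json_df.select(*cols))
         .unionByName(csv_df.select(*cols)))

# Aggregate to surface per-customer usage patterns.
patterns = (usage.groupBy("customer_id", F.to_date("event_ts").alias("event_date"))
            .agg(F.count("*").alias("events"),
                 F.countDistinct("feature").alias("distinct_features")))

patterns.write.mode("overwrite").parquet("/mnt/curated/usage_patterns/")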

TECHNICAL SKILLS:

Programming Languages: Python, R, Java, Scala, Shell Scripting, SQL, PL/SQL, COBOL

Big Data Ecosystem: Spark, Hive, Impala, HBase, Sqoop, Oozie, Hop, Storm, Flume, Kafka, NiFi, Zookeeper, Hadoop MapReduce, HDFS, DBT, Flink

Azure Services: Data Lake, Data Factory, Azure SQL, Synapse Analytics

AWS Services: EC2, EMR, S3, Lambda, Glue, CloudWatch, RDS

Cloud & Data Platforms: Snowflake, GCP, Informatica

DBMS: SQL Server, MySQL, Oracle, DB2, Druid

NoSQL Databases: Cassandra, DynamoDB

IDEs & Tools: Eclipse, PyCharm, Anaconda, Visual Studio, WinSCP

DevOps & Version Control: GitHub, Jenkins, Docker, Terraform, Airflow

Operating Systems: Windows, Unix, Linux, Solaris, CentOS

Frameworks: MVC, Maven, ANT, Splunk, Cucumber

ETL/BI: Talend, Tableau, Power BI, SSIS, IBM DataStage

Methodologies: System Development Life Cycle (SDLC), Agile, Waterfall Model, Jira

PROFESSIONAL EXPERIENCE:

Meta, Durham- NC, USA Nov 2023 – July 2024

ETL Lead

Responsibilities:

●Designed and implemented ETL processes using Python scripts and SQL queries to extract data from multiple sources, transform it according to business rules, and load it into the data warehouse (see the sketch at the end of this section).

●Developed complex SQL queries and stored procedures for data extraction, transformation, and loading tasks.

●Automated data workflows using Python scripts, reducing manual intervention by 30%.

●Optimized ETL jobs for performance improvements, resulting in a 20% reduction in execution time.

●Collaborated with cross-functional teams to gather requirements and ensure data quality and integrity.

●Provided production support and troubleshooting for ETL jobs and data issues.

●Created complex ETL packages, developed custom .NET components, and optimized database performance.

●Led the development of a standalone application using WPF for a client in the financial services sector.

●Designed the application's user interface using XAML and implemented MVVM architecture for better code maintainability.

●Integrated data from SQL Server databases into the WPF application using Entity Framework for data access.

●Implemented custom controls and themes to enhance user experience and usability.

●Conducted user acceptance testing (UAT) and incorporated feedback to refine application functionality.

●Provided technical documentation and user guides for the application.

●Developed Python applications utilizing libraries such as NumPy and Pandas for data analysis and manipulation.

●Integrated Python scripts with SQL databases to automate data extraction, transformation, and loading processes.
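
A minimal sketch of the Python-plus-SQL ETL pattern referenced above. The connection strings, table names, and the business rule are hypothetical placeholders; the intent is only to show the extract-transform-load flow with pandas and SQLAlchemy, not the actual job.

import pandas as pd
from sqlalchemy import create_engine

# Hypothetical connections; real credentials would come from a secrets store.
source = create_engine("postgresql+psycopg2://user:pwd@source-host/sales")
warehouse = create_engine("mssql+pyodbc://user:pwd@dwh-host/dwh?driver=ODBC+Driver+17+for+SQL+Server")

# Extract: pull the day's orders from the operational database (illustrative query).
orders = pd.read_sql("SELECT order_id, customer_id, amount, order_ts FROM orders", source)

# Transform: apply an illustrative business rule and normalize types.
orders["order_ts"] = pd.to_datetime(orders["order_ts"])
orders["high_value"] = orders["amount"] > 1000

# Load: append into the warehouse staging table.
orders.to_sql("stg_orders", warehouse, if_exists="append", index=False)

In practice a script like this would be parameterized and scheduled; the sketch only captures the shape of the flow.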

Equifax, St. Louis, USA Jan 2022 – Oct 2023

Sr Data Engineer

Responsibilities:

●Consulted leadership and stakeholders to share design recommendations, identify product and technical requirements, resolve technical problems, and suggest Big Data based analytical solutions.

●Designed SSIS packages to transfer data between servers, load data into databases, and archive data files from different DBMSs using SQL Server, and used Druid for indexing.

●Extensive experience with Transact-SQL (T-SQL) for data querying and manipulation within Synapse.

●Proficient in IBM Data Stage, a powerful ETL (Extract, Transform, Load) tool, and experienced in integrating it with Azure services.

●Proficient in utilizing SSIS (SQL Server Integration Services) for designing and developing ETL solutions, data extraction, transformation, and loading from various sources into Azure data services.

●Developed JSON Scripts for deploying the Pipeline in Azure Data Factory (ADF) that process the data using the SQL Activity.

●Architect & implement medium to large scale BI solutions on Azure using Azure Data Platform services (Azure Data Lake, Data Factory, Data Lake Analytics, Stream Analytics, Azure SQL DW, HDInsight/Databricks, NoSQL DB).

●Proficient in configuring and deploying Apache Hop in Azure environments, integrating with Azure Data Lake, Azure Data Factory, and Azure Databricks to move and conform data from on-premises to the cloud to serve the company's analytical needs.

●Created pipelines in ADF using Linked Services, Datasets, and Pipelines to extract, transform, and load data from different sources such as Azure SQL, Blob Storage, and Azure SQL Data Warehouse, and to write back to those sources, using PySpark.

●Utilized indexing, distribution strategies, and query plan analysis to enhance query efficiency within Synapse.

●Skilled in configuring and managing Apache Hop and SSIS environments on Azure, ensuring optimal performance, security, and scalability.

●Developed Spark applications using Scala and Spark SQL for data extraction, transformation, and aggregation from multiple file formats, analyzing and transforming the data to uncover insights into customer usage patterns. Responsible for estimating cluster size and for monitoring and troubleshooting the Spark Databricks cluster, and applied the Spark DataFrame API to complete data manipulation within the Spark session.

●Implemented robust data pipelines using Data Stage and Azure Data Factory, ensuring reliable and efficient data movement across different environments.

●Worked on Spark architecture for performance tuning, including Spark Core, Spark SQL, DataFrames, Spark Streaming, driver and worker nodes, stages, executors and tasks, deployment modes, the execution hierarchy, fault tolerance, and collection.

●Created Azure Blob and Data Lake storage, loaded data into Azure Synapse Analytics (SQL DW), and integrated Druid with Azure Synapse.

●Created data governance templates and standards for the data governance organization.

●Created UNIX shell scripts for database connectivity and for executing queries in parallel job runs.

●Collected and aggregated large amounts of web log data from different sources such as webservers, mobile and network devices using Apache Flume and stored the data into HDFS for analysis.

●Implemented robust data security measures within Azure Synapse, including encryption at rest and in transit, to ensure compliance with industry standards.

●Implemented Apache Sqoop for efficiently transferring bulk data between Apache Hadoop and relational databases (Oracle) for product level forecast. Extracted the data from Teradata into HDFS using Sqoop.

●Controlled and granted database access and migrated on-premises databases to Azure Data Lake Store using Azure Data Factory.

●Managed and led the implementation of data governance using Microsoft Project, PowerPoint, and multiple training programs.

●Skilled in optimizing PySpark jobs for improved performance and scalability, including efficient partitioning, caching, and leveraging Spark's distributed computing capabilities on Azure.

●Worked on the Kafka REST API to collect and load data into the Hadoop file system, and used Sqoop to load data from relational databases; consumed real-time feeds using Kafka and Spark Streaming, converted them to RDDs, processed the data as DataFrames, and saved it in Parquet format in HDFS (see the sketch at the end of this section).

●Analyzed existing systems and proposed improvements in processes and systems for the use of modern scheduling tools like Airflow, and migrated legacy systems into an enterprise data lake built on the Azure cloud.

●Experienced in troubleshooting and resolving issues related to Data Stage and Azure integration, including connectivity, data compatibility, and performance bottlenecks.

●Instantiated, created, and maintained CI/CD (continuous integration and deployment) pipelines and applied automation to environments and applications with Docker; worked with automation tools such as Git and Terraform.

●Created a data pipeline package to move data from Blob Storage to a MySQL database and executed MySQL stored procedures using events to load data into tables.
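
The Kafka and Spark Streaming bullet above is illustrated below as a hedged sketch using Spark Structured Streaming (rather than the RDD-based DStream API mentioned in the bullet); the broker address, topic name, payload schema, and HDFS paths are assumptions for illustration.

from pyspark.sql import SparkSession, functions as F
from pyspark.sql.types import StructType, StructField, StringType, DoubleType, TimestampType

spark = SparkSession.builder.appName("kafka-to-hdfs").getOrCreate()

# Assumed JSON payload schema for the real-time feed.
schema = StructType([
    StructField("event_id", StringType()),
    StructField("amount", DoubleType()),
    StructField("event_ts", TimestampType()),
])

# Consume the Kafka topic (broker and topic names are placeholders).
raw = (spark.readStream.format("kafka")
       .option("kafka.bootstrap.servers", "broker:9092")
       .option("subscribe", "events")
       .load())

# Parse the JSON payload into typed columns.
events = (raw.select(F.from_json(F.col("value").cast("string"), schema).alias("e"))
          .select("e.*"))

# Persist as Parquet on HDFS with checkpointing for fault tolerance.
query = (events.writeStream.format("parquet")
         .option("path", "hdfs:///data/events/parquet")
         .option("checkpointLocation", "hdfs:///checkpoints/events")
         .start())
query.awaitTermination()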

ExperienceFlow, Hyderabad, IN – Lal Path Labs Aug 2020 to Dec 2021

Role: Data Engineer

Responsibilities:

●Designed and set up an enterprise data lake to support various use cases, including analytics, processing, storage, and reporting of voluminous, rapidly changing data.

●Participated in the Data governance working group sessions to create policies.

●Migrated an existing on-premises application to AWS, used AWS services such as EC2 and S3 for small data set processing and storage, and maintained the Hadoop cluster on AWS EMR.

●Constructed AWS data pipelines using various AWS resources, including AWS API Gateway to receive responses from AWS Lambda, with Lambda functions retrieving data from Snowflake and converting the response into JSON, alongside Snowflake, DynamoDB, and AWS S3 (see the sketch at the end of this section).

●Implemented data lakes on AWS S3, organizing structured and unstructured data for AI/ML model training and analysis.

●Developed and implemented data acquisition jobs using Scala, Sqoop, and Hive, optimizing MapReduce jobs to use HDFS efficiently with various compression mechanisms, orchestrated with Oozie workflows.

●Worked on NiFi to automate the flow of data and deliver data to every part of the business with smart data pipelines using StreamSets.

●Performed end-to-end architecture and implementation assessments of various AWS services such as Amazon EMR and Redshift.

●Used AWS Glue for data transformation, validation, and cleansing.

●Extensive experience in developing and maintaining ETL solutions using SSIS, including data extraction, transformation, and loading from various sources into target databases.

●Proficient in utilizing Apache Hop and SSIS to handle data quality, cleansing, and validation tasks, ensuring accuracy and integrity of the data.

●Skilled in deploying machine learning models on AWS infrastructure using services like Amazon SageMaker, enabling seamless integration of AI/ML capabilities into applications.

●Designed and developed Spark workflows using Scala to pull data from AWS S3 buckets and Snowflake and apply transformations on it.

●Migrated Oracle functions/packages to Postgres objects using PL/pgSQL.

●Analyzed large and critical datasets using Cloudera, HDFS, MapReduce, Hive, Hive UDF, Sqoop and Spark.

●Used GitLab version control to manage the source code, integrated Git with Jenkins to support build automation, and integrated with Jira to monitor commits.

●Wrote Terraform scripts to automate AWS services, including ELB, CloudFront distributions, RDS, EC2, database security groups, Route 53, VPCs, subnets, security groups, and S3 buckets, and converted existing AWS infrastructure to AWS Lambda deployed via Terraform and AWS CloudFormation.

●Worked on Snowflake schemas and data warehousing, and processed batch and streaming data load pipelines using Snowpipe and Matillion from the data lake Confidential AWS S3 bucket.

●Responsible for maintaining quality reference data in source by performing operations such as cleaning, transformation and ensuring Integrity in a relational environment by working closely with the stakeholders & solution architect.

●Responsible for the design, development, and administration of complex T-SQL queries (DDL/DML), stored procedures, views, and functions for transactional and analytical data structures.

●Developed Hive queries for analysts by loading and transforming large sets of structured and semi-structured data using Hive. Designed the data models used in data-intensive AWS Lambda applications aimed at complex analysis, creating analytical reports for end-to-end traceability, lineage, and definitions of key business elements from Aurora.

●Analyzed data lineage processes to identify vulnerable data points, data quality issues.

●Involved in migrating tables from RDBMS into Hive tables using SQOOP and later generated data visualizations using Tableau.

●Collaborated with Data engineers and operation team to implement ETL process, Snowflake models, wrote and optimized SQL queries to perform data extraction to fit the analytical requirements.

●Implemented AWS EC2, Key Pairs, Security Groups, Auto Scaling, ELB, SQS, and SNS using AWS API and exposed as the Restful Web services.

●Involved in converting MapReduce programs into Spark transformations using Spark RDD's on Scala.

●Interfacing with business customers, gathering requirements and creating data sets/data to be used by business users for visualization.

●Developed Kibana dashboards based on Logstash data and integrated different source and target systems into Elasticsearch for near-real-time log analysis and monitoring of end-to-end transactions.

●Implemented AWS Step Functions to automate and orchestrate Amazon SageMaker tasks such as publishing data to S3, training the ML model, and deploying it for prediction.
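
A minimal sketch of the API Gateway/Lambda/Snowflake pattern described above. The environment variables, warehouse, database, table, and query are assumptions; it only illustrates a Lambda proxy handler that queries Snowflake and returns the result as JSON.

import json
import os

import snowflake.connector


def lambda_handler(event, context):
    # Connection details come from environment variables (assumed configuration).
    conn = snowflake.connector.connect(
        account=os.environ["SF_ACCOUNT"],
        user=os.environ["SF_USER"],
        password=os.environ["SF_PASSWORD"],
        warehouse="ANALYTICS_WH",      # hypothetical warehouse/database/schema
        database="REPORTING",
        schema="PUBLIC",
    )
    try:
        cur = conn.cursor()
        cur.execute("SELECT order_id, status, amount FROM orders LIMIT 100")  # hypothetical query
        cols = [c[0] for c in cur.description]
        rows = [dict(zip(cols, r)) for r in cur.fetchall()]
    finally:
        conn.close()

    # Lambda proxy integration: API Gateway expects a statusCode/body envelope.
    return {"statusCode": 200, "body": json.dumps(rows, default=str)}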

ExperienceFlow, Hyderabad, IN – Nanavati Hospital Jan 2019 to Aug 2020

Role: Big data Developer

Responsibilities:

●Responsible for building scalable distributed data solutions using Hadoop.

●Used Spark for improving performance and optimization of the existing algorithms in Hadoop using Spark Context, Spark Sessions, Spark-SQL, Data Frame, Pair RDD's, Spark YARN.

●Designed and developed POCs in Spark using Scala to compare the performance of Spark with Map Reduce, Hive.

●Involved in creating Hive tables and loading and analyzing data using hive queries.

●Involved in design and development of an application using Hive (UDF).

●Used HiveQL to analyze the partitioned and bucketed data and executed Hive queries on Parquet tables stored in Hive to perform data analysis meeting the business specification logic.

●Built real time data pipelines by developing Kafka producers and Spark streaming applications for consuming.

●Written Hive queries on the analyzed data for aggregation and reporting.

●Implemented Spark scripts using Python and Spark SQL to access Hive tables from Spark for faster data processing (see the sketch at the end of this section).

●Developed end to end data processing pipelines that begin with receiving data using distributed messaging systems Kafka through persistence of data into HBase.

●Worked on a migration project to migrate data from on premises to GCP.

●Built data pipelines in Airflow on GCP for ETL-related jobs using different Airflow operators and Cloud Pub/Sub.

●Used the Cloud Shell SDK in GCP to configure services such as Dataproc, Cloud Storage, and BigQuery.

●Created a Python-based web application using Python scripting for data processing, MySQL for the database, and HTML/CSS with matplotlib for visualization of sales data, tracking progress and identifying trends.

●Developed data formatted web applications and deploy the script using client-side scripting using JavaScript.

●Developed various Python scripts to find vulnerabilities in SQL queries through SQL injection testing, permission checks, and performance analysis.

●Used Hive Context to integrate Hive metastore and Spark SQL for optimum performance.

●Developed Sqoop Jobs to load data from RDBMS to external systems like HDFS and HIVE.

●Worked in Agile Iterative sessions to create Hadoop Data Lake for the client.

●Developed stored procedures, functions, and views for the Oracle database using PL/SQL.

●Responsible for generating actionable insights from complex data to drive real business results for various application teams and worked in Agile Methodology projects extensively.
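
A brief sketch of the Spark-SQL-over-Hive pattern referenced above; the database, table, and column names are hypothetical placeholders, not the actual project objects.

from pyspark.sql import SparkSession

spark = (SparkSession.builder
         .appName("hive-access")
         .enableHiveSupport()          # use the Hive metastore as Spark's catalog
         .getOrCreate())

# Query a partitioned Hive table directly with Spark SQL (names are hypothetical).
daily = spark.sql("""
    SELECT admit_date, department, COUNT(*) AS admissions
    FROM hospital.admissions
    WHERE admit_date >= '2020-01-01'
    GROUP BY admit_date, department
""")

# Cache for repeated downstream use, then persist the result back to Hive.
daily.cache()
daily.write.mode("overwrite").saveAsTable("hospital.daily_admissions")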

ClickLabs PVT Limited, Chandigarh, IN - TransVIP June 2018 to Jan 2019

Data Analyst

Responsibilities:

●Consulted with application development business analysts to translate business requirements into data design requirements used for driving innovative data designs that meet business objectives.

●Involved in information-gathering meetings and JAD sessions to gather business requirements, deliver business requirements document and preliminary logical data model.

●Exported data into Snowflake by creating staging tables to load data from different files in Amazon S3 (see the sketch at the end of this section).

●Compared data at the leaf level across various databases when data transformation or data loading took place, analyzing and investigating data quality after these loads.

●As part of data migration, wrote many SQL scripts to reconcile mismatched data and worked on loading history data from Teradata SQL to Snowflake.

●Designed and developed complex ETL workflows using Informatica PowerCenter, ensuring efficient data extraction, transformation, and loading processes.

●Developed SQL scripts to Upload, Retrieve, Manipulate, and handle sensitive data in Teradata, SQL Server Management Studio and Snowflake Databases for the Project.

●Used Git, GitHub, and Amazon EC2 with deployment on AWS, and used the extracted data for analysis, carrying out various mathematical operations using the Python libraries NumPy and SciPy.

●Created action filters, parameters, and calculated sets for preparing dashboards and worksheets in Tableau.

●Created Tableau scorecards and dashboards using stacked bars, bar graphs, scatter plots, geographical maps, and Gantt charts using the Show Me functionality.

●Incorporated predictive modelling (rule engine) to evaluate the Customer/Seller health score using python scripts, performed computations, and integrated with the Tableau and Power BI.

●Analyzed marketing campaigns from various perspectives including CTR, conversion rates, seasonal/geographical trends, search queries, landing page, conversion funnel, quality score, competitors, distribution channel, etc. to achieve maximum ROI for clients.

●Cleansed, mapped, and transformed data; created job streams; and added and deleted components in job streams on Data Manager based on requirements.

●Developed Teradata SQL scripts using RANK functions to improve the query performance while pulling the data from large tables.

●Used Normalization methods up to 3NF and De-normalization techniques for effective performance in OLTP and OLAP systems. Generated DDL scripts using Forward Engineering technique to create objects and deploy them into the database.

●Used Star Schema methodologies in building and designing the logical data model into Dimensional Models extensively.

●Utilized tools such as Salesforce Data Integration, Salesforce Data Loader, and custom ETL processes to extract, transform, and load data from various sources into Salesforce databases.

●Designed and deployed reports with Drill Down, Drill Through and Drop-down menu option and Parameterized and Linked reports using Tableau.

●Worked on Informatica Power Center tools – Designer, Repository Manager, Workflow Manager and Workflow Monitor. Parsed high-level design to simple ETL coding and mapping standards.

●Worked with data compliance teams, Data governance team to maintain data models, Metadata, Data Dictionaries; define source fields and its definitions.

●Conducted Statistical Analysis to validate data and interpretations using Python and R, as well as presented Research findings, status reports and assisted with collecting user feedback to improve the processes and tools.

●Reported and created dashboards for Global Services & Technical Services using Oracle BI, and Excel. Deployed Excel VLOOKUP, PivotTable, and Access Query functionalities to research data issues.
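
A hedged sketch of the S3-to-Snowflake staging-table load referenced above, driven from the Python Snowflake connector. The account, stage, and table names are placeholders, and the external S3 stage is assumed to already exist with credentials attached.

import snowflake.connector

# Hypothetical connection; in practice credentials come from a secrets manager.
conn = snowflake.connector.connect(
    account="xy12345", user="etl_user", password="***",
    warehouse="LOAD_WH", database="ANALYTICS", schema="STAGING",
)
cur = conn.cursor()

# Staging table mirroring the raw file layout (names are placeholders).
cur.execute("""
    CREATE TABLE IF NOT EXISTS stg_trips (
        trip_id STRING, rider_id STRING, fare NUMBER(10,2), trip_ts TIMESTAMP_NTZ
    )
""")

# Bulk-load CSV files from the pre-created external S3 stage into the staging table.
cur.execute("""
    COPY INTO stg_trips
    FROM @s3_trips_stage/2018/
    FILE_FORMAT = (TYPE = CSV SKIP_HEADER = 1)
""")

conn.close()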

ClickLabs PVT Limited, Chandigarh, IN - Jugnoo Jan 2017 – June 2018

Python Developer

Responsibilities:

●Worked on comprehending and examining the client business requirements.

●Used Django frameworks and python to build dynamic webpages.

●Developed tools for monitoring and notification using python.

●Enhanced the application by using HTML and JavaScript for design and development.

●Designed and developed user interface using front-end technologies like jQuery, AngularJS and TypeScript.

●Designed and developed new programs with COBOL and DB2.

●Used data structures like dictionaries, tuples, and object-oriented class-based inheritance features for building complex network algorithms.

●Created PHP/MySQL back-end for data entry from Flash and worked in tandem with the Flash developer to obtain the correct data through query string.

●Involved in designing database models, APIs, and views using Python to build an interactive web-based solution.

●Generated Python Django forms to record data of online users (see the sketch at the end of this section).

●Coded JavaScript for page functionality and Pop up Screens.

●Implemented a module to connect and view the status of an Apache Cassandra instance using python.

●Developed MVC prototype replacement of current product with Django.

●Improved the Data Security and generated report efficiently by caching and reusing data.

●Managed datasets using pandas DataFrames and queried MySQL databases using the Python MySQL connector (MySQLdb).

●Recorded online users' data using Python Django forms and implemented test cases using pytest.

●Developed the application using the test-driven methodology and designed the unit tests using python unit test framework.

●Deployed the project to Heroku using the Git version control system, and maintained and updated the application in accordance with the client's requirements.

●Implemented unit tests using pytest and mock objects for improved code quality and maintainability for testing and debugging purpose.

●Developed BDD tests using Cucumber by writing behaviors and step definitions. Developed required Selenium support code in python for Cucumber.
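
A minimal sketch of the Django form and pytest pattern referenced above; the form fields and test data are assumptions for illustration, and settings are configured inline only so the snippet runs standalone.

import django
from django.conf import settings

# Minimal inline settings so the form can be exercised outside a full project.
if not settings.configured:
    settings.configure()
    django.setup()

from django import forms


class UserSignupForm(forms.Form):
    # Hypothetical fields for recording an online user.
    name = forms.CharField(max_length=100)
    email = forms.EmailField()
    city = forms.CharField(max_length=60, required=False)


# pytest-style unit tests, in line with the test-driven bullets above.
def test_signup_form_accepts_valid_data():
    form = UserSignupForm(data={"name": "Asha", "email": "asha@example.com", "city": "Chandigarh"})
    assert form.is_valid()


def test_signup_form_rejects_bad_email():
    form = UserSignupForm(data={"name": "Asha", "email": "not-an-email"})
    assert not form.is_valid()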


