SAMYUKTHA RAVULA
(Azure Data Engineer)
Phone: 682-***-**** Email: ************@*****.***
LinkedIn ID: www.linkedin.com/in/samyuktha-ravula
PROFESSIONAL SUMMARY
9+ years of professional IT experience, including 5+ years in Data Engineering and 4 years in Data Warehousing.
Experienced data professional with a strong background in end-to-end management of ETL data pipelines, ensuring scalability and smooth operations.
Proficient in optimizing query techniques and indexing strategies to enhance data fetching efficiency.
Skilled in utilizing SQL queries, including DDL, DML, and various database objects, for data manipulation and retrieval.
Expertise in integrating on-premises and cloud-based data sources using Azure Data Factory, applying transformations, and loading data into Snowflake.
Strong knowledge of data warehousing techniques, including data cleansing, Slowly Changing Dimension handling, surrogate key assignment, and changing data capture for Snowflake modeling.
Experienced in designing and implementing scalable data ingestion pipelines using tools such as Apache Kafka, Apache Flume, and Apache Nifi.
Proficient in developing and maintaining ETL/ELT workflows using technologies like Apache Spark, Apache Beam, or Apache Airflow for efficient data extraction, transformation, and loading processes.
Skilled in implementing data quality checks and cleansing techniques to ensure data accuracy and integrity throughout the pipeline.
Experienced in building and optimizing data models and schemas using technologies like Apache Hive, Apache HBase, or Snowflake for efficient data storage and retrieval for analytics and reporting.
Strong proficiency in developing ELT/ETL pipelines using Python and Snowflake SQL.
Skilled in creating ETL transformations and validations using Spark-SQL/Spark Data Frames with Azure Databricks and Azure Data Factory.
Collaborative team member, working closely with Azure Logic Apps administrators and DevOps engineers to monitor and resolve issues related to process automation and data processing pipelines.
Experienced in optimizing code for Azure Functions to extract, transform, and load data from diverse sources.
Strong experience in designing, building, and maintaining data integration programs within Hadoop and RDBMS environments.
Proficient in implementing CI/CD frameworks for data pipelines using tools like Jenkins, ensuring efficient automation and deployment.
Skilled in executing Hive scripts through Hive on Spark and Spark SQL to address various data processing needs.
Collaborative team member, ensuring data integrity and stable data pipelines while collaborating on ETL tasks.
Strong experience in utilizing Kafka, Spark Streaming, and Hive to process streaming data, developing robust data pipelines for ingestion, transformation, and analysis.
Proficient in utilizing Spark Core and Spark SQL scripts using Scala to accelerate data processing capabilities.
Experienced in utilizing JIRA for project reporting, task management, and ensuring efficient project execution within Agile methodologies.
Actively participated in Agile ceremonies, including daily stand-ups and PI Planning, demonstrating effective project management skills.
TECHNICAL SKILLS
Azure Services
Azure Data Factory, Airflow, Azure Databricks, Logic Apps, Function App, Snowflake, Azure DevOps
Big Data Technologies
MapReduce, Hive, Python, PySpark, Scala, Kafka, Spark Streaming, Oozie, Sqoop, ZooKeeper
Hadoop Distribution
Cloudera, Hortonworks
Languages
Java, SQL, PL/SQL, Python, HiveQL, Scala.
Operating Systems
Windows (XP/7/8/10), UNIX, Linux, Ubuntu, CentOS.
Build Automation tools
Ant, Maven
Version Control
GIT, GitHub.
IDE & Build Tools, Design
Eclipse, Visual Studio.
Databases
MS SQL Server 2016/2014/2012, Azure SQL DB, Azure Synapse, MS Excel, MS Access, Oracle 11g/12c, Cosmos DB
EDUCATION
Bachelor's in Computer Science from JNTU University, India.
WORK EXPERIENCE
Role: Azure Snowflake Data Engineer Nov 2023 – Present
Client: Optum, Eden Prairie, MN (Remote)
Responsibilities:
Involved in migrating SQL databases to Azure services such as Azure Data Lake, Azure Data Lake Analytics, Azure SQL Database, Databricks, and Azure SQL Data Warehouse; managed and granted database access and oversaw the migration of on-premises databases to storage accounts using Azure Data Factory.
Developed Spark applications using PySpark and Spark SQL in Databricks to extract, transform, and aggregate data from various file formats for in-depth analysis and to uncover customer insights.
Ingested real-time streaming data using Apache Kafka, Azure Event Hubs, and Event Grid into Azure Storage accounts, and used this data to build downstream pipelines.
Developed Kafka producers for real-time JSON streaming to Kafka topics, processed the streams with Spark Streaming, and loaded the results into Synapse SQL (see the streaming sketch after this list).
Used Azure Data Lake Storage Gen2 for storing Excel and Parquet files, effectively retrieving user data using Blob API.
Designed and implemented end-to-end ETL pipelines in Azure Data Factory to ingest, process, and transform structured and unstructured data into Snowflake, enabling advanced analytics and reporting.
Automated complex data transformations and workflows using Azure Databricks with PySpark and Scala, significantly improving processing efficiency and reducing execution times.
Set up Jenkins Agents (slaves) on Kubernetes for distributed builds.
Integrated SonarQube with Jenkins for continuous code quality checks.
Scheduled builds and test cases using Jenkins and JUnit Reporting plugins.
Built scalable Star and Snowflake schemas in Snowflake, optimizing data warehouse design to support high-performance analytics and BI tools.
Configured GitHub Actions to automate CI/CD pipelines, ensuring seamless testing, deployment, and monitoring of Azure Data Factory and Databricks workflows.
Developed robust SQL scripts and stored procedures to perform data transformations, aggregations, and loading processes for Snowflake and other relational databases.
Automated the generation of Parquet and JSON files through ADF pipelines, incorporating advanced transformations and ensuring data quality for downstream processing.
Integrated pipelines with Service Bus, Pulsar, and ChartPoint for reliable message-based communication and timely processing of newly generated files.
Leveraged Veto to monitor Azure Data Factory pipelines, tracking completed and failed nodes, execution times, and ensuring seamless pipeline execution.
Developed Canvas validation workflows to ensure incoming files met predefined criteria such as file type, naming conventions, and delimiters, improving pipeline reliability.
Created interactive Power BI dashboards to monitor ADF pipeline performance, execution statuses, and activity logs, providing actionable insights and trend analysis.
Utilized Azure Monitor and Log Analytics to aggregate and analyze pipeline logs, integrating with Power BI for centralized and real-time operational monitoring.
Implemented Kafka producers for real-time JSON streaming to Kafka topics, processed data with Spark Streaming, and stored results in Synapse SQL for downstream analytics.
Automated Azure HDInsight cluster creation with PowerShell scripting, optimizing resources for big data processing workflows.
Collaborated with cross-functional teams to define data requirements and implemented solutions using Scala, PySpark, SQL, and Snowflake, ensuring alignment with business objectives.
Developed custom alerts and notifications integrated with chase logs to proactively notify stakeholders of pipeline successes, failures, or delays.
Optimized Snowflake performance with partitioning, clustering, and caching strategies, reducing query execution times for large-scale datasets.
Streamlined pipeline orchestration by integrating ADF with GitHub for version control, enabling collaboration and tracking of pipeline changes.
Built and scheduled real-time data pipelines in Snowflake using Streams and Tasks, ensuring timely updates to analytical datasets for business insights (see the Streams and Tasks sketch after this list).
Implemented data ingestion workflows using Apache Kafka, Azure Event Hubs, and Event Grid, enabling seamless processing of real-time and batch data into Azure Storage.
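The Kafka-to-Synapse streaming flow referenced above can be sketched roughly as follows in PySpark on Databricks; this is a minimal sketch, and the topic, schema, storage paths, and connection values are illustrative assumptions rather than the actual project objects.

    # Minimal sketch, assuming a Databricks cluster with the Kafka source and the
    # Azure Synapse connector available; all names below are placeholders.
    from pyspark.sql import SparkSession
    from pyspark.sql.functions import col, from_json
    from pyspark.sql.types import StringType, StructField, StructType, TimestampType

    spark = SparkSession.builder.appName("json-stream-to-synapse").getOrCreate()

    event_schema = StructType([
        StructField("event_id", StringType()),
        StructField("member_id", StringType()),
        StructField("event_ts", TimestampType()),
    ])

    # Read raw JSON events from a Kafka topic (broker and topic names are assumptions).
    raw = (spark.readStream
           .format("kafka")
           .option("kafka.bootstrap.servers", "broker1:9092")
           .option("subscribe", "member-events")
           .load())

    parsed = (raw.selectExpr("CAST(value AS STRING) AS json")
              .select(from_json(col("json"), event_schema).alias("e"))
              .select("e.*"))

    # Write each micro-batch to Azure Synapse via the Databricks Synapse connector.
    def write_to_synapse(batch_df, batch_id):
        (batch_df.write
         .format("com.databricks.spark.sqldw")
         .option("url", "jdbc:sqlserver://<synapse-server>;database=<db>")  # placeholder
         .option("tempDir", "abfss://staging@<account>.dfs.core.windows.net/tmp")
         .option("forwardSparkAzureStorageCredentials", "true")
         .option("dbTable", "dbo.member_events")
         .mode("append")
         .save())

    (parsed.writeStream
     .foreachBatch(write_to_synapse)
     .option("checkpointLocation", "abfss://staging@<account>.dfs.core.windows.net/chk/member-events")
     .start())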
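The Snowflake Streams and Tasks scheduling mentioned above follows the pattern below; a hedged sketch using snowflake-connector-python, with table, warehouse, schedule, and credential values assumed for illustration.

    # Sketch only: object names, credentials, and the MERGE columns are illustrative.
    import snowflake.connector

    conn = snowflake.connector.connect(
        account="<account>", user="<etl_user>", password="<password>",
        warehouse="ANALYTICS_WH", database="EDW", schema="STAGING",
    )
    cur = conn.cursor()

    # Stream captures inserts/updates on the staging table.
    cur.execute("CREATE STREAM IF NOT EXISTS claims_stream ON TABLE stg_claims")

    # Task merges pending changes into the reporting table on a schedule,
    # but only when the stream actually has data.
    cur.execute("""
        CREATE OR REPLACE TASK merge_claims
          WAREHOUSE = ANALYTICS_WH
          SCHEDULE  = '15 MINUTE'
          WHEN SYSTEM$STREAM_HAS_DATA('CLAIMS_STREAM')
        AS
          MERGE INTO rpt_claims t
          USING claims_stream s ON t.claim_id = s.claim_id
          WHEN MATCHED THEN UPDATE SET t.status = s.status, t.updated_at = s.updated_at
          WHEN NOT MATCHED THEN INSERT (claim_id, status, updated_at)
                                VALUES (s.claim_id, s.status, s.updated_at)
    """)
    cur.execute("ALTER TASK merge_claims RESUME")  # tasks are created suspended
    conn.close()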
Environment: Azure Databricks, Data Factory, Azure Synapse Analytics, Azure Event Hubs, Apache Kafka, Azure Event Grid, HDInsight, Slowly Changing Dimensions, Logic Apps, Function App, Snowflake, MS SQL, Oracle, HDFS, MapReduce, YARN, Spark, Hive, SQL, Python, Scala, PySpark, Spark Performance, data integration, data modeling, data pipelines, production support, DAGs, Apache Airflow, Shell scripting, GIT, JIRA, Jenkins, Kafka, ADF Pipelines, and Power BI.
Role: Azure Data Engineer Sep 2022 – Oct 2023
Client: Walmart, Dallas, TX
Responsibilities:
Managed end-to-end operations of ETL data pipelines, ensuring scalability and smooth functioning.
Implemented optimized query techniques and indexing strategies to enhance data fetching efficiency.
Utilized SQL queries for data manipulation and retrieval, including DDL, DML, and various database objects (indexes, triggers, views, stored procedures, functions, and packages).
Integrated on-premises (MySQL, Cassandra) and cloud-based (Blob storage, Azure SQL DB) data using Azure Data Factory, applying transformations and loading data into Snowflake.
Orchestrated seamless data movement into SQL databases using Data Factory's data pipelines.
Applied data warehousing techniques including data cleansing, Slowly Changing Dimension (SCD) handling, surrogate key assignment, and change data capture for Snowflake modeling (see the SCD sketch after this list).
Designed and implemented scalable data ingestion pipelines using tools such as Apache Kafka, Apache Flume, and Apache Nifi to collect and process large volumes of data from various sources.
Developed and maintained ETL/ELT workflows using technologies like Apache Spark, Apache Beam, or Apache Airflow, enabling efficient data extraction, transformation, and loading processes.
Implemented data quality checks and data cleansing techniques to ensure the accuracy and integrity of the data throughout the pipeline.
Built and optimized data models and schemas using technologies like Apache Hive, Apache HBase, or Snowflake to support efficient data storage and retrieval for analytics and reporting purposes.
Developed ELT/ETL pipelines using Python and Snowflake SQL (SnowSQL) to move data to and from the Snowflake data store (see the ELT sketch after this list).
Created ETL transformations and validations using Spark-SQL/Spark Data Frames with Azure Databricks and Azure Data Factory.
Collaborated with Azure Logic Apps administrators to monitor and resolve issues related to process automation and data processing pipelines.
Optimized code for Azure Functions to extract, transform, and load data from diverse sources, including databases, APIs, and file systems.
Designed, built, and maintained data integration programs within Hadoop and RDBMS environments.
Implemented a CI/CD framework for data pipelines using the Jenkins tool, enabling efficient automation and deployment.
Collaborated with DevOps engineers to establish automated CI/CD and test-driven development pipelines using Azure, aligning with client requirements.
Demonstrated proficiency in scripting languages like Python and Scala for efficient data processing.
Executed Hive scripts through Hive on Spark and SparkSQL to address diverse data processing needs.
Collaborated on ETL tasks, ensuring data integrity and maintaining stable data pipelines.
Utilized Kafka, Spark Streaming, and Hive to process streaming data, developing a robust data pipeline for ingestion, transformation, and analysis.
Utilized Spark Core and Spark SQL scripts using Scala to accelerate data processing capabilities.
Utilized JIRA for project reporting, creating subtasks for development, QA, and partner validation.
Actively participated in Agile ceremonies, including daily stand-ups and internationally coordinated PI Planning, ensuring efficient project management and execution.
Developed and implemented data integration solutions for Salesforce, enabling seamless data flow between various systems and Salesforce platform.
Proficient in AWS and Ab Initio, experienced in building scalable data engineering solutions.
Skilled in the Ab Initio ETL tool, with expertise in complex data integration and data quality processes.
Developed end-to-end data pipelines using AWS and Ab Initio, improving data accuracy and performance.
Applied core Azure data engineering practices such as data modeling, data warehousing, ETL (Extract, Transform, Load) processes, and data visualization.
Proficient in designing and maintaining databases, ensuring referential integrity and utilizing ER diagrams.
Experienced in data warehousing, including developing and maintaining data warehouses.
Used Informatica's data transformation capabilities to manipulate and enrich data during the ETL process, working with a wide range of data sources including databases, cloud applications, flat files, and web services, and handling diverse data types and formats.
Used Informatica's cloud-based data integration to seamlessly integrate data between on-premises systems and cloud-based applications.
Used cloud-based analytics to monitor inventory levels, forecast demand, and optimize stock replenishment, reducing carrying costs and minimizing stockouts.
Leveraged cloud analytics to perform sentiment analysis on customer reviews and social media data, gaining insights into customer feedback and sentiment toward products and services.
Used SSIS, tightly integrated with SQL Server, to extract data from SQL Server databases, perform transformations, and load data back into SQL Server or other destinations.
Experienced in integrating Cassandra with various applications and frameworks, enabling seamless data storage and retrieval for web applications and other data-driven systems.
Collaborated with development and operations teams to understand application requirements and build efficient CI/CD workflows.
Troubleshot and resolved CI/CD pipeline issues and optimized pipelines for speed and reliability.
Set up and configured Kubernetes clusters for different environments (development, staging, production) with proper resource allocation and security measures.
Monitored and maintained Kubernetes cluster health, performance, and availability.
Created Docker images for applications, optimizing image size and minimizing vulnerabilities.
Managed Docker registries for storing and versioning Docker images.
Coordinated between development, testing, and operations teams to manage release milestones and updates.
Created and executed automated tests, including unit, integration, and end-to-end tests, within the CI/CD pipeline, and leveraged Snowflake to deliver personalized shopping experiences to retail customers.
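A simplified PySpark sketch of the SCD Type 2 handling and surrogate key assignment described in this list: it compares an incoming extract against the current dimension by row hash, closes superseded versions, and stamps new versions with fresh surrogate keys. Table and column names are assumptions made for illustration.

    from pyspark.sql import SparkSession, Window, functions as F

    spark = SparkSession.builder.appName("scd2-sketch").getOrCreate()

    dim = spark.table("dw.dim_customer")                 # current dimension with is_current flag
    incoming = spark.read.parquet("/landing/customers")  # today's extract

    hash_cols = ["name", "segment", "city"]
    incoming = incoming.withColumn("row_hash", F.sha2(F.concat_ws("||", *hash_cols), 256))

    current = dim.filter("is_current = true")
    changed = (incoming.alias("i")
               .join(current.alias("c"),
                     F.col("i.customer_id") == F.col("c.customer_id"))
               .filter(F.col("i.row_hash") != F.col("c.row_hash"))
               .select("i.*"))

    # Close out the superseded dimension rows.
    expired = (current.join(changed.select("customer_id"), "customer_id")
               .withColumn("is_current", F.lit(False))
               .withColumn("end_date", F.current_date()))

    # Assign surrogate keys above the current maximum for the new versions.
    max_sk = current.agg(F.max("customer_sk")).first()[0] or 0
    new_versions = (changed
                    .withColumn("customer_sk",
                                F.row_number().over(Window.orderBy("customer_id")) + F.lit(max_sk))
                    .withColumn("is_current", F.lit(True))
                    .withColumn("start_date", F.current_date())
                    .withColumn("end_date", F.lit(None).cast("date")))
    # expired and new_versions would then be merged back into the dimension,
    # e.g. via a MERGE statement in Snowflake.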
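A hedged sketch of the Python-plus-Snowflake-SQL ELT step mentioned in this list: files staged by the ADF pipeline are bulk-loaded with COPY INTO and then transformed inside Snowflake. The stage, tables, and credentials are placeholder assumptions.

    import snowflake.connector

    conn = snowflake.connector.connect(
        account="<account>", user="<etl_user>", password="<password>",
        warehouse="LOAD_WH", database="EDW", schema="STAGING",
    )
    cur = conn.cursor()
    try:
        # Bulk-load staged Parquet files (the stage points at the ADLS/Blob container).
        cur.execute("""
            COPY INTO stg_orders
            FROM @adls_orders_stage/daily/
            FILE_FORMAT = (TYPE = PARQUET)
            MATCH_BY_COLUMN_NAME = CASE_INSENSITIVE
        """)

        # ELT: transform inside the warehouse rather than in the pipeline.
        cur.execute("""
            INSERT INTO dw.fact_orders (order_id, customer_id, order_date, amount)
            SELECT order_id, customer_id, order_ts::date, amount
            FROM stg_orders
            WHERE amount IS NOT NULL
        """)
    finally:
        cur.close()
        conn.close()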
Environment: Azure Databricks, Data Factory, Logic Apps, Function App, Snowflake, MS SQL, Oracle, Cloud Analytics, HDFS, MapReduce, Report Development, YARN, Spark, Hive, SQL, Python, SSIS, Extraction, Transformation and Loading (ETL), Scala, PySpark, Spark Performance, relational databases and SQL, Azure Cosmos DB, PowerShell, data integration, data modeling, data pipelines, production support, Ab Initio, Shell scripting, GIT, JIRA, Jenkins, Kafka, ADF Pipeline, Power BI, Azure, Cassandra, DevOps, Informatica, Data Lake
Role: Azure Data Engineer Nov 2020 – Aug 2022
Client: State of GA, GA
Responsibilities:
Enhanced Spark performance by optimizing data processing algorithms, leveraging techniques such as partitioning, caching, and broadcast variables (see the tuning sketch after this list).
Implemented efficient data integration solutions to seamlessly ingest and integrate data from diverse sources, including databases, APIs, and file systems, using tools like Apache Kafka, Apache NiFi, and Azure Data Factory.
Ingested data into one or more Azure services (Azure Data Lake, Azure Storage, Azure SQL, Azure DW) and processed the data in Azure Databricks.
Worked on Microsoft Azure services such as HDInsight clusters, Blob Storage, Data Factory, and Logic Apps, and completed a POC on Azure Databricks.
Performed ETL using Azure Databricks and migrated on-premises Oracle ETL processes to Azure Synapse Analytics.
Migrated SQL databases to Azure Data Lake, Azure Data Lake Analytics, Azure SQL Database, Databricks, and Azure SQL Data Warehouse; controlled and granted database access and migrated on-premises databases to Azure Data Lake Store using Azure Data Factory.
Analyzed pharmaceutical data to provide valuable insights and support decision-making, and successfully managed EMR-based projects, delivering on time and within budget while mitigating risks.
Upheld industry standards and ethical practices, adhering to Good Clinical Practice guidelines, and used EMR for training and knowledge transfer to foster a skilled workforce.
Transferred data using Azure Synapse and PolyBase.
Deployed and optimized Python web applications through Azure DevOps CI/CD pipelines, freeing the team to focus on development.
Developed enterprise-level solutions using batch processing and streaming frameworks (Spark Streaming, Apache Kafka).
Designed and implemented robust data models and schemas to support efficient data storage, retrieval, and analysis using technologies like Apache Hive, Apache Parquet, or Snowflake.
Skilled in working with BSON (Binary JSON) documents, which enable seamless integration with various programming languages and frameworks.
Proficient in using geospatial indexes in MongoDB to perform location-based queries and geospatial data analysis.
Knowledgeable in monitoring MongoDB performance using tools like MongoDB Compass and Mongostat, and adept at optimizing database performance through proper indexing and query optimization.
Developed and maintained end-to-end data pipelines using Apache Spark, Apache Airflow, or Azure Data Factory, ensuring monitoring and troubleshooting for data pipelines and identifying and resolving performance bottlenecks, data quality issues, and system failures.
Implemented event-driven architecture with AWS Lambda, triggering functions based on specific events to enhance responsiveness; integrated Lambda with other AWS services for seamless data flow and interactions, and implemented robust error handling and logging for data integrity and effective debugging (see the Lambda sketch after this list).
Leveraged Lambda's automatic scaling for optimal resource utilization and cost efficiency.
Processed the schema-oriented and non-schema-oriented data using Scala and Spark.
Created partitions and buckets based on State for further processing using bucket-based Hive joins.
Designed efficient data pipelines with Glue ETL jobs to cleanse, transform, and load data from diverse sources, ensuring accuracy for analysis. Leveraged Glue's serverless architecture for cost-effective operations, dynamically scaling resources based on demand.
Automated data cataloging with the Glue Data Catalog, facilitating seamless data discovery and collaboration. Integrated Glue with AWS services like S3, Redshift, and Athena to enhance data accessibility and enable comprehensive analysis (see the Glue sketch after this list). Implemented robust error handling, monitoring, and security measures to maintain data integrity and compliance. Developed a data pipeline using Kafka, Spark, and Hive to ingest, transform, and analyze data.
Worked on RDDs and DataFrames (Spark SQL) using PySpark for analyzing and processing the data.
Implemented Spark scripts using Scala and Spark SQL to load Hive tables into Spark for faster data processing.
Used Spark Streaming to divide streaming data into batches as input to the Spark engine for batch processing.
Implemented CI/CD pipelines to build and deploy the projects in the Hadoop environment.
Used JIRA to manage issues and the project workflow.
Monitored test results and collaborated with developers to address issues and failures.
Performed vulnerability assessments and penetration testing on container images and applications.
Collaborated with DevOps and development teams to remediate security issues and enforce security policies.
Developed deployment automation scripts and templates to enable consistent and repeatable deployments.
Implemented auto-scaling configurations to dynamically adjust resource allocation based on application demand.
Configured monitoring tools (e.g., Prometheus, Grafana) to track CI/CD pipeline performance and health.
Created dashboards and alerts to detect and respond to potential issues proactively.
Proficient in utilizing Power BI's advanced features such as DAX (Data Analysis Expressions) for complex calculations and measures.
Leveraged DynamoDB's scalability and low-latency performance to efficiently handle large volumes of data for smooth retrieval and analysis.
Ensured data security and compliance in DynamoDB through robust access controls and encryption mechanisms.
Designed optimal DynamoDB data models, optimizing query performance and minimizing costs.
Ability to customize and format visualizations in Power BI to meet specific business requirements and enhance user experience.
Implemented S3 versioning and backup features for data durability and recovery.
Automated data archiving in S3 with lifecycle policies, optimizing storage costs.
Integrated S3 with AWS services like Glue, Athena, and Redshift for data analysis and informed decision-making.
Demonstrated expertise in transforming raw data into meaningful visual representations using Power Query Editor and data cleansing techniques.
Worked on Spark using Python (PySpark) and Spark SQL for faster testing and processing of data.
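An illustrative PySpark sketch of the tuning techniques listed above (repartitioning, caching, and broadcast joins); the paths, column names, and partition count are assumed values, not the actual project configuration.

    from pyspark.sql import SparkSession, functions as F
    from pyspark.sql.functions import broadcast

    spark = SparkSession.builder.appName("spark-tuning-sketch").getOrCreate()

    claims = spark.read.parquet("abfss://data@<account>.dfs.core.windows.net/raw/claims")
    providers = spark.read.parquet("abfss://data@<account>.dfs.core.windows.net/raw/providers")  # small dimension

    # Repartition on the join/aggregation key to reduce shuffle skew, and cache
    # because the frame is reused by several downstream aggregations.
    claims = claims.repartition(200, "provider_id").cache()

    # Broadcasting the small dimension turns a shuffle join into a map-side join.
    enriched = claims.join(broadcast(providers), "provider_id")

    daily = (enriched
             .groupBy("provider_id", F.to_date("service_ts").alias("service_date"))
             .agg(F.sum("paid_amount").alias("total_paid")))

    (daily.write.mode("overwrite")
     .partitionBy("service_date")
     .parquet("abfss://data@<account>.dfs.core.windows.net/curated/daily_paid"))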
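A minimal sketch of the event-driven Lambda pattern described above, assuming an S3 "object created" trigger and an SQS hand-off; the bucket, queue URL, and accepted file types are hypothetical.

    import json
    import logging
    import boto3

    logger = logging.getLogger()
    logger.setLevel(logging.INFO)
    sqs = boto3.client("sqs")
    QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/ingest-queue"  # placeholder

    def lambda_handler(event, context):
        for record in event.get("Records", []):
            bucket = record["s3"]["bucket"]["name"]
            key = record["s3"]["object"]["key"]
            try:
                # Only hand off files the downstream pipeline knows how to process.
                if not key.endswith((".json", ".parquet")):
                    logger.warning("Skipping unsupported file %s", key)
                    continue
                sqs.send_message(QueueUrl=QUEUE_URL,
                                 MessageBody=json.dumps({"bucket": bucket, "key": key}))
                logger.info("Queued %s/%s for processing", bucket, key)
            except Exception:
                logger.exception("Failed to queue %s/%s", bucket, key)
                raise  # let Lambda retry / route the event to a DLQ
        return {"queued": len(event.get("Records", []))}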
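A sketch of a Glue ETL job along the lines described above: it reads a table registered in the Glue Data Catalog, drops null fields, and writes curated, partitioned Parquet to S3. The database, table, bucket, and partition key are illustrative assumptions.

    import sys
    from awsglue.transforms import DropNullFields
    from awsglue.utils import getResolvedOptions
    from awsglue.context import GlueContext
    from awsglue.job import Job
    from pyspark.context import SparkContext

    args = getResolvedOptions(sys.argv, ["JOB_NAME"])
    glue_context = GlueContext(SparkContext.getOrCreate())
    job = Job(glue_context)
    job.init(args["JOB_NAME"], args)

    # Source table registered by a Glue crawler in the Data Catalog.
    orders = glue_context.create_dynamic_frame.from_catalog(
        database="retail_raw", table_name="orders")

    cleaned = DropNullFields.apply(frame=orders)

    # Land curated Parquet, partitioned for downstream Athena/Redshift queries.
    glue_context.write_dynamic_frame.from_options(
        frame=cleaned,
        connection_type="s3",
        connection_options={"path": "s3://curated-bucket/orders/", "partitionKeys": ["order_date"]},
        format="parquet",
    )
    job.commit()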
Environment: Azure Databricks, Data Factory, Data Mart, DB, Function App, Glue, Snowflake, MS SQL, Oracle, HDFS, EMR, MapReduce, S3, Spark, Hive, SQL, Python, Scala, PySpark, Spark Performance, data integration, data modeling, data visualization, Lambda Functions, data pipelines, production support, Shell scripting, GIT, JIRA, Jenkins, Kafka, ADF Pipeline, Power BI, portfolio, PL/SQL, SSIS, Informatica, NoSQL.
Role: Data Engineer Jul 2019 – Oct 2020
Client: CGI, Dallas, TX
Responsibilities:
Designed and set up Enterprise Data Lake to provide support for various use cases including Analytics, processing, storing, and Reporting of voluminous, rapidly changing data.
Responsible for maintaining quality reference data at the source by performing operations such as cleaning and transformation and ensuring integrity in a relational environment, working closely with stakeholders and the solution architect.
Created tabular models on Azure Analysis Services to meet business reporting requirements.
Ingested data into one or more Azure cloud services (Azure Data Lake, Azure Storage, Azure SQL, Azure DW) and, as part of cloud migration, processed the data in Azure Databricks.
Created pipelines, data flows, and complex data transformations and manipulations using ADF and PySpark with Databricks.
Worked with Azure Blob and Data Lake Storage and loaded data into Azure Synapse Analytics (SQL DW).
Developed Python, PySpark, and Bash scripts to transform and load data across on-premises and cloud platforms.
Worked on Apache Spark, utilizing the Spark SQL and Spark Streaming components to support intraday and real-time data processing.
Set up and worked with Kerberos authentication principals to establish secure network communication on the cluster, and tested HDFS, Hive, Pig, and MapReduce cluster access for new users.
Used Spark SQL with the Scala and Python interfaces, which automatically convert RDD case classes to schema RDDs.
Imported data from different sources such as HDFS and HBase into Spark RDDs and performed computations using PySpark to generate the output response.
Implemented performance optimization techniques such as using the distributed cache for small datasets, partitioning and bucketing in Hive, and map-side joins.
Good knowledge of Spark platform parameters such as memory, cores, and executors.
Developed a reusable framework, leveraged for subsequent migrations, that automates ETL from RDBMS systems to the Data Lake using Spark Data Sources and Hive data objects (see the sketch after this list).
Imported and exported databases using SQL Server Integration Services (SSIS) and Data Transformation Services (DTS) packages.
Extensive experience with Big Data technologies and DevOps, including Hadoop, Kafka, RabbitMQ, MySQL, and cloud deployment.
Experience with database management systems such as Microsoft SQL Server, Oracle, or MySQL.
Familiarity with SnapLogic, a data integration tool, for seamless data flow between systems.
Hands-on experience in designing and implementing data integration workflows using SnapLogic's visual interface.
Strong troubleshooting and problem-solving skills in data integration.
Proficient in Agile development methodologies, delivering projects with multiple priorities and minimal supervision.
Strong technical proficiency in programming languages (Python, Java, Scala), API development (REST, SOAP), scripting (Perl, Shell), and infrastructure automation (Docker, Kubernetes).
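A hedged sketch of the reusable RDBMS-to-Data-Lake framework referenced in this list: a config-driven loop that pulls source tables over JDBC with parallel partitions and lands them as Hive-registered Parquet tables. The connection string, table list, and partition bounds are placeholder assumptions.

    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("rdbms-to-lake").enableHiveSupport().getOrCreate()

    JDBC_URL = "jdbc:sqlserver://<source-db>:1433;databaseName=sales"  # placeholder
    TABLES = [  # driven by a config table or file in the real framework
        {"source": "dbo.customers", "target": "lake.customers", "split_col": "customer_id"},
        {"source": "dbo.orders",    "target": "lake.orders",    "split_col": "order_id"},
    ]

    for t in TABLES:
        # Parallel JDBC read, partitioned on a numeric key to spread the extract.
        df = (spark.read.format("jdbc")
              .option("url", JDBC_URL)
              .option("dbtable", t["source"])
              .option("user", "<etl_user>").option("password", "<password>")
              .option("partitionColumn", t["split_col"])
              .option("lowerBound", 1).option("upperBound", 10_000_000)
              .option("numPartitions", 8)
              .load())
        # Land as a Hive-registered Parquet table for downstream Spark and Hive use.
        df.write.mode("overwrite").format("parquet").saveAsTable(t["target"])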
Environment: Azure, Azure Data Factory, Databricks, PySpark, RabbitMQ, RDBMS, Jenkins, Docker, SnapLogic, Kubernetes, Python, Apache Spark, HBase, Hive, Sqoop, Snowflake, SSRS, Tableau.
Role: Big Data Developer May 2018 – Jun 2019
Client: Rockwell Collins, Chicago, IL
Responsibilities:
Designed and developed applications on the data lake to transform the data according to business users to perform analytics.
In-depth understanding of Hadoop architecture and its various components, such as HDFS, Application Master, Node Manager, Resource Manager, Name Node, Data Node, and MapReduce concepts.
Involved in developing a Map Reduce framework that filters bad and unnecessary records.
Involved heavily in setting up the CI/CD pipeline using Jenkins, Maven, Nexus, GitHub, and AWS.
Developed a data pipeline using Flume, Sqoop, Pig, and MapReduce to ingest customer behavioral data and purchase histories into HDFS for analysis.
Used Spark SQL to load JSON data, create schema RDDs, and load them into Hive tables, and handled structured data using Spark SQL.
Used Hive to perform transformations, event joins, and pre-aggregations before storing the data in HDFS.
Created Hive tables as per requirements, as internal or external tables defined with appropriate static and dynamic partitions for efficiency.
Implemented workflows using the Apache Oozie framework to automate tasks.
Developed design documents, considering all possible approaches and identifying the best of them.
Wrote MapReduce code that takes log files as input, parses them, and structures them in tabular format to facilitate effective querying of the log data (see the mapper sketch after this list).
Developed scripts and automated data management end to end, keeping all clusters in sync.
Implemented the Fair Scheduler on the JobTracker to share cluster resources among the MapReduce jobs submitted by users.
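A simplified Python mapper for the log-parsing MapReduce step described above (runnable with Hadoop Streaming): it parses each raw log line and emits tab-separated fields that can be queried as a table. The log layout in the regex is an assumed example, not the actual project format.

    #!/usr/bin/env python
    import re
    import sys

    # Assumed access-log layout: ip - - [timestamp] "METHOD url HTTP/x.x" status
    LOG_PATTERN = re.compile(
        r'(?P<ip>\S+) \S+ \S+ \[(?P<ts>[^\]]+)\] "(?P<method>\S+) (?P<url>\S+) \S+" (?P<status>\d{3})')

    for line in sys.stdin:
        match = LOG_PATTERN.match(line)
        if not match:          # filter bad / unparseable records
            continue
        fields = match.groupdict()
        print("\t".join([fields["ip"], fields["ts"], fields["method"],
                         fields["url"], fields["status"]]))

Such a mapper would typically be submitted through the Hadoop Streaming JAR (e.g. hadoop jar hadoop-streaming.jar -input /logs/raw -output /logs/parsed -mapper mapper.py), with the tab-separated output exposed to Hive as an external table.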
Environment: Cloudera CDH 3/4, Hadoop, HDFS, MapReduce, Hive, Oozie, Pig, Shell Scripting, MySQL.
Role: Data warehouse Developer Jul 2015 – Jul 2017
Client: Hudda InfoTech Private Limited Hyderabad, India
Responsibilities:
Created and maintained databases for Server Inventory and Performance Inventory.
Worked in Agile Scrum methodology with daily stand-up meetings; strong knowledge of Visual SourceSafe for Visual Studio 2010 and tracked projects using Trello.
Generated drill-through and drill-down reports with drop-down menu options, data sorting, and subtotals in Power BI.
Used the data warehouse to develop data marts that feed downstream reports, and developed a User Access Tool with which users can create ad hoc reports and run queries to analyze data in the proposed cube.
Deployed the SSIS Packages and created jobs for efficient running of the packages.
Experienced in Infrastructure as Code (IAC) using Terraform to deploy and manage multi-cloud environments (AWS, Azure, GCP).
Emphasized security, compliance, and troubleshooting; skilled in integrating Terraform into CI/CD pipelines and optimizing cloud resources for cost-effective solutions.
Expertise in creating ETL packages using SSIS to extract data from heterogeneous databases and then transform and load them into the data mart.
Involved in creating SSIS jobs to automate report generation and cube refresh packages.
Expertise in deploying SSIS packages to production and using different package configurations to export package properties, making packages environment independent.
Experienced with SQL Server Reporting Services (SSRS) to author, manage, and deliver both paper-based and interactive Web-based reports.
Developed stored procedures and triggers to facilitate consistent data entry into the database.
Shared data externally using Snowflake, quickly setting up data sharing without transferring data or building pipelines.
Environment: Windows server, MS SQL Server 2014, SSIS, SSAS, SSRS, SQL Profiler, Power BI, C#, Performance Point Server, MS Office, SharePoint, Terraform.