
DevOps Engineer, AWS Cloud

Location:
Wheaton, IL
Posted:
October 30, 2024


Resume:

Mohammad Khan

Email ID: ad9tfi@r.postjobfree.com

PH: +1-331-****-***

LinkedIn URL: https://www.linkedin.com/in/mohammad-khan-b87b17202/

Sr. DevOps Engineer

Professional Summary:

·Over 9 years of diverse experience spanning Linux administration, configuration management, continuous integration, deployment, release management, and cloud implementations.

·Proficient in managing multi-platform (Windows/Unix) environments using TFS, CVS, and Subversion.

·Skilled in administering Production, Development, and Test environments across Windows, Ubuntu, Red Hat Linux, SUSE Linux, CentOS, and Solaris servers.

·Extensive hands-on expertise in AWS cloud services such as VPC, EC2, S3, RDS, and Redshift.

·Proficient in building scalable distributed data systems on the Hadoop ecosystem using AWS EMR; migrated on-premises database structures to an AWS Redshift data warehouse using AWS Glue.

·Experienced in containerizing applications with Docker and deploying them on Kubernetes clusters managed by Amazon EKS.

·Proven history of managing infrastructure with Terraform, handling SQL, MySQL, and MongoDB databases, and architecting solutions on AWS and Office 365.

·Adept in CI/CD frameworks using Jenkins, Maven, and Ant, with a solid foundation in SCM tools such as Git; implemented CI/CD pipelines with Docker, Kubernetes, EKS, Jenkins, and Git.

·Proficient with logging and monitoring tools such as Elasticsearch, Kibana, and Nagios.

·Effective communication and critical-thinking skills, with a proactive approach to learning and adapting to new technologies.

Technical Skills

Operating Systems: Solaris 7-10, Windows (2K, XP, 2003, NT, 2008, 2012), Red Hat Linux (ES 4-6), AIX 7, HP-UX 11.23
Version Control: ClearCase, Git
Scripting Tools: Shell, Perl, Python, Groovy
Build/CI Tools: Ant, Maven, Jenkins, Chef, Puppet, Nexus, Sonar
Container Tools: Docker, Kubernetes, OpenShift
E-mail Servers: Sendmail, Postfix
Monitoring: Nagios, Grafana, Splunk, Prometheus
Networking: DNS, DHCP, TCP/IP, SMTP, LDAP, SAMBA
Databases: Oracle (10g/9i), SQL Server, MySQL, Amazon Aurora
Cloud: AWS (EC2, VPC, EBS, AMI, SNS, RDS, CloudWatch, Terraform, Packer, S3, IAM)
Bug Tracking Tools: ServiceNow, JIRA
Virtualization Tools: VMware vSphere, vCenter
Applications: Apache, Tomcat, WebLogic

Education: Bachelor’s in Computer Science from Osmania University

Professional Experience:

AT&T, Topeka, KS October 2020 to present

ROLE: AWS DevOps Engineer

·Converted existing AWS infrastructure to a serverless architecture (AWS Lambda, Kinesis), deploying via Terraform and AWS CloudFormation templates.

·Provided R&D consulting to resolve bandwidth and throughput issues on Red Hat for a custom big data SAS analytics IaaS, PaaS, and SaaS platform.

·Built an on-demand, secure EMR launcher with custom spark-submit steps using S3 events, SNS, KMS, and a Lambda function (a minimal sketch appears at the end of this section).

·Utilized AWS Glue to build a Data Catalog and integrate it with Aurora DB.

·Utilized Ansible and Jenkins to automate provisioning of the identity management solution used to implement Single Sign-On (SSO) for AWS, and integrated EKS authentication with SSO.

·Converted all Hadoop jobs to run in EMR by configuring the cluster according to the data size.

·Managed the client's AWS cloud-based Platform as a Service (PaaS) and Infrastructure as a Service (IaaS) environments.

·Utilized CloudFormation and Puppet to create DevOps processes for a consistent and reliable deployment methodology.

·Performed codeline Git pushes (releases) to production and customer servers, and developed and improved Git push scripts.

·Assisted in migrating on-premises applications to the AWS cloud.

·Configured and deployed instances in AWS environments and data centers; familiar with EC2, CloudWatch, Elasticsearch, and managing security groups on AWS.

·Worked in an IaaS environment, using Terraform to manage application infrastructure such as storage and networking.

·Created image templates for AWS Elastic Kubernetes Service (EKS) clusters and Docker base images provisioned using Terraform.

·Used Bash and Python (including Boto3) to supplement the automation provided by Ansible and Terraform for tasks such as encrypting the EBS volumes backing AMIs and scheduling Lambda functions for routine AWS tasks.

·Designed, supported, and maintained a large Splunk environment in a highly available, redundant, geographically dispersed configuration.

·Backed up AWS PostgreSQL to S3 via a daily job run on EMR using DataFrames.

·Configured the AWS Virtual Private Cloud and database subnet groups to isolate resources within Amazon RDS for the Aurora DB cluster.

·Worked in container-based technologies like Docker, Kubernetes and OpenShift.

·Provided highly durable and available data using the S3 data store, versioning, and lifecycle policies, and created AMIs of mission-critical production servers for backup.

·Set up and maintained logging and monitoring subsystems using tools such as Elasticsearch, Kibana, Prometheus, Grafana, and Alertmanager.

·Developed DevOps scripts in Groovy to automate the collection and analysis of Cassandra data.

·Managed SVN repositories for branching, merging, and tagging, and developed Shell/Groovy scripts for automation.

·Involved in application deployments to AWS Elastic Beanstalk environments.

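A minimal Python sketch of the kind of on-demand EMR launcher described above, assuming a Lambda function with a direct S3 trigger; the SNS fan-out is omitted, and the log bucket, security configuration name, instance sizes, and job script path are hypothetical rather than details from this role:

```python
import boto3

emr = boto3.client("emr")

def handler(event, context):
    """Lambda entry point: fires when a new object lands in S3 and launches a
    transient EMR cluster that runs one spark-submit step over that object."""
    record = event["Records"][0]                      # assumes a direct S3 trigger
    bucket = record["s3"]["bucket"]["name"]
    key = record["s3"]["object"]["key"]

    emr.run_job_flow(
        Name="on-demand-spark-step",
        ReleaseLabel="emr-6.4.0",
        Applications=[{"Name": "Spark"}],
        LogUri="s3://example-emr-logs/",              # hypothetical log bucket
        Instances={
            "InstanceGroups": [
                {"InstanceRole": "MASTER", "InstanceType": "m5.xlarge", "InstanceCount": 1},
                {"InstanceRole": "CORE", "InstanceType": "m5.xlarge", "InstanceCount": 2},
            ],
            "KeepJobFlowAliveWhenNoSteps": False,     # cluster terminates after the step
        },
        SecurityConfiguration="kms-at-rest",          # hypothetical; wraps the KMS key
        Steps=[{
            "Name": "spark-submit",
            "ActionOnFailure": "TERMINATE_CLUSTER",
            "HadoopJarStep": {
                "Jar": "command-runner.jar",
                "Args": [
                    "spark-submit", "--deploy-mode", "cluster",
                    "s3://example-jobs/transform.py",  # hypothetical job script
                    f"s3://{bucket}/{key}",
                ],
            },
        }],
        JobFlowRole="EMR_EC2_DefaultRole",
        ServiceRole="EMR_DefaultRole",
    )
```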

Costco Travels, Seattle, WA January 2020 to September 2020

Role: DevOps Cloud Engineer

Responsibilities:

Prepared capacity and architecture plans to create the Azure cloud environment to host migrated IaaS VMs and PaaS role instances for refactored applications and databases.

Worked on networking, Azure services, and website deployments, and deployed applications as PaaS (Websites, Web Roles, and Worker Roles).

Created Azure Automation assets, graphical runbooks, and PowerShell runbooks to automate specific tasks. Expertise in deploying Azure AD Connect and configuring ADFS installation using Azure AD Connect.

Involved in migrating SQL Server databases to Azure SQL Database using the SQL Azure Migration Wizard, and used a Python API to upload agent logs into Azure Blob Storage (a minimal sketch appears at the end of this section).

Developed data marts in the Snowflake cloud data warehouse.

Extracted and loaded data into Azure Blob Storage and Snowflake databases using Azure Data Factory and Databricks.

Ingested data into one or more Azure services (Azure Data Lake, Azure Storage, Azure SQL, Azure DW) and processed the data in Azure Databricks.

Leveraged Azure App Services to deploy and manage web applications, ensuring scalability and resilience while integrating with Azure Active Directory for secure authentication.

Utilized Azure Application Insights for comprehensive application performance monitoring, enabling real-time visibility into application health and user experiences, facilitating quick diagnostics and performance optimization.

Managed Azure Storage Accounts for scalable, secure data storage solutions, implementing blob storage for unstructured data and leveraging access policies to ensure data security and compliance.

Orchestrated cloud resources efficiently using Azure Resource Groups, enabling structured management and governance of resources, cost management, and streamlined deployment across different environments.

Demonstrated proficiency in implementing and managing DevOps tools, including Selenium for automated web testing to ensure application reliability and JMeter for performance testing, optimizing application response times and scalability.

Actively collaborated with development teams to integrate Azure DevOps practices, facilitating seamless CI/CD pipelines that improved deployment frequency and application quality, while also ensuring alignment with project requirements and timelines.

Provided expert guidance on leveraging Azure DevOps features such as build and release pipelines, artifact management, and Azure Repos, enhancing team productivity and fostering a culture of automation and continuous improvement.

Worked closely with developers to understand application architectures and dependencies, enabling the design of efficient deployment strategies and monitoring solutions that ensured high availability and performance in production environments.

Led knowledge-sharing sessions on Azure DevOps best practices, including infrastructure as code, testing automation, and security integration, empowering development teams to adopt these practices in their workflows.

Created and maintained MS Azure cloud infrastructure and a virtual network between the MS Azure cloud and the on-premises network for backend communication.

Designed the architecture for API development and deployment as microservices, including Python code in Docker containers and Azure Service Fabric.

Developed and deployed an integration solution using a serverless architecture, utilizing AWS S3, DynamoDB, EC2, and Cognito.

Worked on Azure Service Fabric, microservices, IoT, and Docker containers in Azure, and was involved in setting up a Terraform continuous build integration system. Used the Azure Internal Load Balancer to provide high availability for IaaS VMs and PaaS role instances.

Designed IoT SDK tools for automating Azure IoT Hub testing using Python, Docker, Bash, PowerShell, REST, C#, C++.

Worked with Azure Container Registry (ACR), Swagger, remote Docker debugging, JUnit test suites, Docker logging, IoT Hub device and module creation and testing, IoT security, TLS, and encryption.

Migrated on-premises data (Oracle, SQL Server, DB2, MongoDB) to Azure Data Lake Store (ADLS) using Azure Data Factory (ADF V1/V2).
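
A minimal Python sketch of the agent-log upload to Azure Blob Storage mentioned above, using the azure-storage-blob SDK; the environment-variable name, container name, and file path are illustrative assumptions, not details from this role:

```python
import os
from azure.storage.blob import BlobServiceClient

# In practice the connection string would come from Key Vault or configuration;
# AZURE_STORAGE_CONNECTION_STRING is an assumed environment-variable name.
CONNECTION_STRING = os.environ["AZURE_STORAGE_CONNECTION_STRING"]

def upload_agent_log(local_path: str, container: str = "agent-logs") -> None:
    """Upload one local agent log file into the given blob container."""
    service = BlobServiceClient.from_connection_string(CONNECTION_STRING)
    blob = service.get_blob_client(container=container, blob=os.path.basename(local_path))
    with open(local_path, "rb") as log_file:
        blob.upload_blob(log_file, overwrite=True)   # replace any same-named blob

# Example: push the day's agent log (placeholder path).
# upload_agent_log("/var/log/agent/agent-2020-05-01.log")
```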

Environment: Subversion (SVN), Jenkins, JAVA/J2EE, ANT, MAVEN, MS Azure, CHEF, TC Server, Tomcat, Python Scripts, Shell Scripts, Ansible, XML, UNIX, SonarQube, Windows 7, Oracle, JIRA.

Magellan Health, VA October 2018 to December 2019

ROLE: Cloud Engineer

Responsibilities:

·Built, administered, and troubleshot all mission-critical environments (Production, Stage, Dev, Test, QA).

·Performed health checks on the Production and UAT environments and on the application modules running in them.

·Managed Amazon Web Services (AWS) infrastructure with automation and configuration management tools such as Ansible, Puppet, and custom-built tooling; designed cloud-hosted solutions with specific AWS product suite experience.

·Experience in designing and deploying AWS Solutions using EC2, S3, EBS, Elastic Load Balancer (ELB), Auto scaling groups, AMIs.

·Transformed and cleansed data on AWS EMR using Spark and Hive.

·Experience configuring S3 versioning and lifecycle policies to back up files and archive them to Glacier (a minimal sketch appears at the end of this section).

·Built virtualization and cloud computing stacks in IaaS and PaaS (private, public, and hybrid) using the OpenStack dashboard and AWS (EC2 and S3).

·Wrote shell scripts to spin up clusters such as EMR, EC2, and Amazon RDS.

·Performed deployments on AWS using EKS, AWS CodePipeline, and AWS deploy pipelines.

·Worked on AWS EKS to deploy Docker images already in use in the on-prem application, as part of migrating the application to the cloud.

·Collaborated on the automation of AWS infrastructure via Terraform and Jenkins, with software and service configuration via Chef cookbooks.

·Used Terraform to move infrastructure from on-premises to the cloud.

·Configured RDS instances using CloudFormation and Terraform, and used Terraform to map more complex dependencies and identify network issues.

·Designed highly available, cost-effective, and fault-tolerant systems using multiple EC2 instances, Auto Scaling, Elastic Load Balancer, and AMIs.

·Highly skilled in the usage of data center automation and configuration management tools such as Docker.

·Used AppDynamics to monitor the performance of applications and servers, and Datadog to monitor the performance of the EKS cluster and worker nodes.

·Deployed and monitored scalable infrastructure on Amazon Web Services (AWS) with configuration management using Puppet.

·Used Puppet server and workstation to manage and configure nodes, writing Puppet manifests to automate configuration of a broad range of services.

·Experience in Continuous Integration (CI) and Continuous Deployment (CD) using Jenkins.

·Prepared projects, dashboards, reports, and questions for all JIRA related services. Generated scripts for effective integration of JIRA applications with other tools.

·Automated the build and release management process including monitoring changes between releases.
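
A minimal Boto3 sketch of the S3 versioning and Glacier lifecycle configuration referenced above; the bucket name, prefix, and retention windows are hypothetical values used only for illustration:

```python
import boto3

s3 = boto3.client("s3")
BUCKET = "example-backup-bucket"   # hypothetical bucket name

# Keep every version of backed-up objects.
s3.put_bucket_versioning(
    Bucket=BUCKET,
    VersioningConfiguration={"Status": "Enabled"},
)

# Move month-old objects to Glacier and expire year-old noncurrent versions.
s3.put_bucket_lifecycle_configuration(
    Bucket=BUCKET,
    LifecycleConfiguration={
        "Rules": [{
            "ID": "archive-and-expire",
            "Status": "Enabled",
            "Filter": {"Prefix": "backups/"},
            "Transitions": [{"Days": 30, "StorageClass": "GLACIER"}],
            "NoncurrentVersionExpiration": {"NoncurrentDays": 365},
        }],
    },
)
```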

Environment: AWS, S3, EBS, Elastic Load balancer (ELB), Auto Scaling Groups, CA LISA, VPC, IAM, Cloud Watch, Glacier, Jenkins, Maven, Groovy, Subversion, Terraform, Ant, Bash Scripts, Git, Docker, Jira, Chef, Java 8.

MGM Grand - Las Vegas, NV August 2017 to September 2018

ROLE: DevOps / Cloud Engineer

Responsibilities:

·Created, installed, and administered Red Hat virtual machines in a VMware environment. Administered RHEL 5.x/6.x and Solaris 10/11, including installation, testing, tuning, patching, and troubleshooting of day-to-day issues.

·Experience in using Tomcat Web Server and JBOSS, WebLogic and WebSphere Application Servers for deployment.

·Created Python scripts to automate AWS services including web servers, ELB, CloudFront distributions, databases, EC2 and database security groups, S3 buckets, and application configuration; the scripts create stacks or single servers, or join web servers to existing stacks (a minimal sketch appears at the end of this section).

·Good experience writing scripts in popular languages such as Python, Bash, and shell.

·Installed and configured Terraform and built infrastructure using Terraform configuration files.

·Built Jenkins jobs to create AWS infrastructure from GitHub repos containing terraform code.

·Involved in architecting, building, and maintaining highly available, secure, multi-zone AWS cloud infrastructure utilizing Chef, Puppet, and Ansible, with Terraform and Jenkins for continuous integration.

·Experience in using Jenkins to automate most of the build related tasks.

·Deployed DevOps tooling using Puppet, Puppet Dashboard, and PuppetDB for configuration management of existing infrastructure.

·Managed Puppet with Git, distributing Puppet manifests.

·Configured Apache webserver in the Linux AWS Cloud environment using Puppet automation.

·Built and managed a highly available monitoring infrastructure to monitor different application servers and their components using Nagios, with Puppet automation.

·Administered and maintained Red Hat 3.0, 4.0, 5.0, and 6.0 AS/ES; troubleshot hardware, operating system, application, and network problems and performance issues; deployed the latest patches for Linux and application servers; and performed Red Hat Linux kernel tuning.

·Experience in implementing and configuring network services such as HTTP, DHCP, and TFTP.

·Install and configure DHCP, DNS (BIND, MS), web (Apache, IIS), mail (SMTP, IMAP, POP3), and file servers on Linux servers.

·Installed and managed virtual servers using KVM; configured, maintained, and troubleshot NFS and FTP servers.

·Configured and performed TCP/IP troubleshooting on servers, and set up the ESXi server and its connectivity with existing equipment.

·Added, removed, and updated user account information, reset passwords, and configured and administered cron jobs.

·Administered Linux servers for several functions including managing Apache/Tomcat server, mail server, and MySQL databases in both development and production.

·Wrote Bash shell scripts to automate routine activities, and migrated database applications from Windows Server 2008 to Linux servers.

·Installed and set up Oracle 9i on Linux for the development team; performed Linux kernel and memory upgrades, managed swap areas, and performed Red Hat Linux Kickstart installations.

·Worked in a 3-Tier architecture model supporting web applications hosted on Windows Server 2008

·Performing log administration to detect system errors and resolve systems and process issues reported by Nagios.

·Used Linux Logical Volume Manager (LVM) to manage drives, managed file systems using fdisk and gdisk, and provided support for both physical and virtual environments.

·Install, upgrade and manage packages via RPM and YUM package management.

·Install Firmware Upgrades, kernel patches, systems configuration, performance tuning.
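
A condensed Boto3 sketch of the kind of Python automation for security groups and EC2 web servers described above; the VPC ID, AMI ID, group name, and instance size are placeholders, not values from this role:

```python
import boto3

ec2 = boto3.client("ec2")

def build_web_server(vpc_id: str, ami_id: str) -> str:
    """Create a web-tier security group and launch one web server into it."""
    sg = ec2.create_security_group(
        GroupName="web-tier-sg",                     # placeholder group name
        Description="Allow HTTP from anywhere",
        VpcId=vpc_id,
    )
    ec2.authorize_security_group_ingress(
        GroupId=sg["GroupId"],
        IpPermissions=[{
            "IpProtocol": "tcp", "FromPort": 80, "ToPort": 80,
            "IpRanges": [{"CidrIp": "0.0.0.0/0"}],
        }],
    )
    reservation = ec2.run_instances(
        ImageId=ami_id,
        InstanceType="t3.micro",                     # placeholder instance size
        MinCount=1,
        MaxCount=1,
        SecurityGroupIds=[sg["GroupId"]],
    )
    return reservation["Instances"][0]["InstanceId"]

# Example (placeholder IDs):
# instance_id = build_web_server("vpc-0123456789abcdef0", "ami-0abcdef1234567890")
```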

Environment: Amazon Web Services, Nagios, Microservices, Jenkins, Java/J2EE, RabbitMQ, Python, Web logic, UNIX, VMware, Artifactory, Shell, Perl, Linux/Ubuntu, IAM, S3, EBS, AWS SDK, Cloud Watch, Terraform, Packer, JSON, Puppet, Docker, Chef, GitHub, Cassandra.

Microinfo, India May 2015 to April 2017

ROLE: LINUX Administrator

·Installed and set up Oracle 9i on Linux for the development team; performed Linux kernel and memory upgrades, managed swap areas, and performed Red Hat Linux Kickstart installations.

·Worked in a 3-Tier architecture model supporting web applications hosted on Windows Server 2008

·Performed log administration to detect system errors and resolved system and process issues reported by Nagios.

·Used Linux Logical Volume Manager (LVM) to manage drives, managed file systems using fdisk and gdisk, and provided support for both physical and virtual environments.

·Installed, upgraded, and managed packages via RPM and YUM package management.

·Install Firmware Upgrades, kernel patches, systems configuration, performance tuning.

·Installed and configured SSL security certificates, and worked with NetApp for taking snapshots and mirroring on both SAN and NAS.

·Good hands-on experience with build environments using Ant.

·Used various networking tools such as ssh, telnet, rlogin, tcpdump, ftp, and ping to troubleshoot daily networking issues (a scripted equivalent is sketched at the end of this section).

·Worked with DBAs on installation of RDBMS database, restoration and log generation.
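
A small Python sketch of a scripted reachability check standing in for the manual ping/telnet troubleshooting mentioned above; the host names and ports are illustrative only:

```python
import socket

def check_tcp(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Example: verify that the web and mail tiers answer on their service ports.
for host, port in [("web01.example.com", 80), ("mail01.example.com", 25)]:
    status = "reachable" if check_tcp(host, port) else "unreachable"
    print(f"{host}:{port} {status}")
```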

Environment: Red Hat Linux 3.0/4.0/5.0 AS/ES, HP DL585, Oracle 9i/10g, VMware, Tomcat 3.x/4.x/5.x, Apache Server 1.x/2.x, Bash.


