KIRAN KUMAR
MB: +1-940-***-****
Email:***************@*****.***
LinkedIn: https://www.linkedin.com/in/kiran-b-a66498265/
Professional Certificate:
AWS Certified Solutions Architect – Associate
Validation Number: 38K9GB6CSN4Q1BWR
Professional Summary:
Successful Cloud/DevOps Engineer with 9+ years of professional experience dedicated to automation and optimization. Understands and manages the space between operations and development to deliver code to customers quickly. Experienced with cloud platforms as well as DevOps automation development for Linux systems. Seeking a Cloud/DevOps position where I can contribute my technical knowledge.
Experienced in building and maintaining Docker containers managed by Kubernetes clusters, and in deploying Kubernetes pods using Helm charts.
Experienced in containerization and clustering technologies such as Docker and Kubernetes. Expertise in deploying Kubernetes clusters in cloud and on-premises environments with master/minion architecture; wrote several YAML manifests to create resources such as pods, deployments, auto-scalers, load balancers, labels, health checks, namespaces, ConfigMaps, etc.
Performed automation tasks on various Docker components such as Docker Hub, Docker Engine, Docker Machine, and Docker Registry. Worked with Docker images and containers.
Created documentation of the Terraform infrastructure in Confluence; used Terraform to manage AWS and other cloud infrastructure, and managed servers using configuration management tools such as Chef and Puppet.
Experience in Linux Administration, Configuration Management, Continuous Integration (CI), Continuous Deployment, Release Management and Cloud Implementations.
Experience in building scalable, fault-tolerant, and resilient cloud-native solutions by leveraging cloud services.
Good knowledge of AWS Step Functions and of determining the best approach for orchestrating serverless functions and services.
Worked on integrating AWS Fraud Detector APIs into existing applications and workflows, Implemented real-time transaction monitoring and alerting systems. Provisioned and configured AWS resources required for AWS Fraud Detector.
Solid expertise in implementing CI/CD for Java and .NET applications using tools such as Jenkins and Ansible, both on AWS and on-premises.
Knowledgeable in provisioning IaaS, PaaS, and SaaS virtual machines and web/worker roles on Microsoft Azure Classic and Azure Resource Manager.
Experience in PaaS with Red Hat OpenShift Container Platform: architecting, installing, and configuring the platform using tools and technologies such as Ansible, Terraform, Bash scripting, Git, Elasticsearch, Fluentd, Logstash, Kibana, Prometheus, Grafana, and Alertmanager.
Experience in Agile development for OpenShift being part of a Scrum Development team.
Providing support to users and developers for IIS-related issues.
Good understanding of machine-learning concepts such as feature engineering and the training and testing of models.
Knowledge in configuring DNS, DHCP, NFS, SAMBA, FTP, Remote Access Protocol, security management and Security troubleshooting skills.
Exposed to all aspects of the Software Development Life Cycle (SDLC), including analysis, planning, development, testing, implementation, and post-production analysis of projects.
Proficient in the AWS Cloud platform and its features, including EC2, VPC, EBS, AMI, SNS, RDS, CloudWatch, CloudTrail, CloudFormation, AWS CDK, AWS Config, Auto Scaling, SST, CloudFront, IAM, S3, and Route 53.
Implemented Amazon EC2 setting up instances, virtual private cloud (VPCs), and security groups.
Developed applications using programming languages such as Python, Java, Node.js, or .NET, ensuring compatibility with AWS services.
Designed EC2 instance architecture to meet high-availability application architecture and security requirements.
Created AWS instances via Jenkins with the EC2 plugin and integrated nodes into Chef via the knife command-line utility.
Worked with IAM service creating new IAM users & groups, defining roles and policies and Identity providers.
Created alarms and trigger points in CloudWatch based on thresholds and monitored the server's performance, CPU Utilization, and disk usage.
Utilized AWS CloudWatch to monitor the environment for operational and performance metrics during load testing.
Automated the deployment and scaling of ELK components using infrastructure-as-code (IaC) tools and provided ongoing support for the ELK stack, including troubleshooting and resolving issues.
Experience with AWS cloud security services such as FMS, ANF, IGW, and WAF for implementing security measures for web applications, creating and managing rules to protect against web-based attacks.
Implemented Continuous Integration concepts using Hudson, Bamboo, Jenkins, and AnthillPro.
Extensively worked on Jenkins/Hudson by configuring and maintaining for the purpose of continuous integration (CI) and for End-to-End automation for all builds and deployments.
Worked with source control tools such as Git and Perforce in UNIX and Windows environments; migrated Subversion repositories to Git and integrated the Eclipse IDE with versioning tools such as Subversion and Git.
Skilled at setting up Baselines, Branching, Merging and Automation Processes using Shell and Batch Scripts and supporting the developers in writing configuration specs.
Implemented a Docker-based Continuous Integration and Deployment framework.
Installed, configured, modified, tested, and deployed applications on Apache web server, Nginx, and Tomcat servers.
Experience in working within the Cloud platforms like OpenStack and AWS for integration processes.
Experience in monitoring System/Application Logs of servers using Splunk to detect Prod issues.
Broad experience in Bash, Perl, and Python scripting on Linux. Strong knowledge of Linux internals.
Experience with Bug tracking tools like JIRA, Bugzilla, and Remedy.
Developed Puppet modules to automate deployment, configuration, and lifecycle management of key clusters.
Proposed branching strategy suitable for current application in Subversion.
Experienced with the principles and best practices of Software Configuration Management (SCM) processes, including compiling, packaging, deploying, and application configuration.
Good experience in working with a team together to deliver the best outputs in given time frames. Excellent interpersonal skills, and ability to interact with people at all levels.
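As an illustration of the Kubernetes work summarized above, a minimal YAML manifest of the kind described (a deployment plus a load-balanced service, with labels and a health check) might look like the following sketch. All names, the image, and the port are hypothetical, not taken from any actual project:

```yaml
# Hypothetical manifest: names, image, and port are illustrative only
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-app
  namespace: demo
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web-app
  template:
    metadata:
      labels:
        app: web-app
    spec:
      containers:
        - name: web-app
          image: nginx:1.25        # placeholder image
          ports:
            - containerPort: 80
          livenessProbe:           # health check, as mentioned above
            httpGet:
              path: /
              port: 80
---
# Service exposing the deployment through a cloud load balancer
apiVersion: v1
kind: Service
metadata:
  name: web-app
  namespace: demo
spec:
  type: LoadBalancer
  selector:
    app: web-app
  ports:
    - port: 80
      targetPort: 80
```

Applied with `kubectl apply -f`, a manifest like this creates the deployment, pods, labels, and load balancer in one step.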
Technical Skills:
Operating System
Windows Server 2000/2003/2008, Fedora, UNIX, Linux, Mac OS, Red Hat Linux ES 5 & 6, IBM AIX 4.3/5L/6/7, HP-UX 11.23, Ubuntu, Sun Solaris, VMware ESX 4.0/5.1/5.5, CentOS 5/6/7, SUSE
CI & CD Tools
Jenkins, Bamboo, Ansible, Chef, Puppet, Saltstack, AWS Code pipeline & deploy
Databases
Oracle, SQL Server 2008/2012, MySQL, NoSQL, DynamoDB, MongoDB
Build Tools
ANT, MAVEN, Gradle
Version Control
SVN, GIT, Perforce, Bitbucket, Clearcase
Scripting
C, Java, Python, Perl, Bash & Shell scripting, Ruby, Groovy
Virtualisation Tools
Docker, VM Virtual Box, VMware
Tracking Tools
Remedy, Jira, ServiceNow, IBM ClearQuest
Monitoring Tools
Nagios, Splunk, CloudWatch, Grafana, Prometheus
Web/Application Servers
Tomcat, JBOSS, Apache, IIS, WebSphere, WebLogic.
Cloud Platform
AWS: EC2, IAM, Elastic Beanstalk, Elastic Load Balancer, RDS, S3, Glacier, SQS, SNS, CloudFormation, Route53, VPC, CloudWatch, CDK, SST (Serverless Stack), AWS Config, EKS, ECR, Amazon OpenSearch Service
Network Protocols
TCP/IP, TCP/UDP, DNS, DHCP, HTTP/HTTPS, FTP, SMTP, ICMP, Telnet, SSH, IPv4, IPv6; VPC, subnets, switching and routing; tools: cURL, iperf, PuTTY, Traceroute, Ping, Netstat
Containerisation
Kubernetes, Docker, EKS, ECS, ECR.
Professional Experience:
Client: Vanguard Oct 2023 to present
Role: AWS Support Engineer
Description: TCM is a new business-critical, Enterprise Outbound Email Service. It’s cloud-based, real-time, and initiated through actions taken by our clients on their accounts. MarTech leverages software as a service with AWS, Boomi, SFMC, and Smarsh to provide Vanguard with a modernized, secure, and reliable transactional email and messaging service.
Designed and implemented infrastructure solutions, including servers, storage networks and cloud services and managed system architecture to ensure scalability, reliability and performance.
Provided expert-level development and implementation of AWS-based enterprise solutions; integrated third-party products and ensured that expected application performance levels were achieved.
Monitored changes in regulatory policies, identified potential clients, marketed Smarsh solutions, and supported clients in implementing the Smarsh solution.
Deployed and maintained cloud infrastructure, ensuring optimal performance, security, and scalability and collaborated with internal teams to provide support.
Worked on providing domain expertise in AWS application deployment methodology and development architecture standards, resolved elevated issues and recommended improvements.
Troubleshot and resolved incidents, collaborating with development and IT teams to minimize downtime and maintain service quality.
Automated the build, test, and deployment processes to achieve continuous integration and delivery; integrated CI/CD pipelines with AWS services such as Lambda and S3 to deploy applications reliably and efficiently.
Worked with a cross-functional team to gather requirements and inputs needed to establish JIRA dashboards, used for easy high-level snapshots of project progress.
Defining security policies and standards for cloud applications and infrastructure, Managing IAM Policies and roles.
Worked on AWS Step Functions for orchestrating serverless workflows.
Wrote and deployed code for AWS Lambda functions in Python.
Automated the deployment and scaling of Lambda functions using Infrastructure as Code (IaC) tools such as CloudFormation and the AWS CDK; troubleshot issues related to Lambda function execution, performance, and errors.
Worked with CI/CD pipelines for automated deployment using Bamboo, with Bitbucket as the code repository.
Works with cross-functional team members and communicates system issues at the appropriate technical level.
Worked thoroughly on AWS Application deployment methodology and development architecture standards, resolved elevated issues and recommended improvements.
Ensures the viability of IT deliverables, and development options, including design reviews, and conducts testing, including functionality, technical limitations, and security.
Worked on elevating code into the development, test, and production environments on schedule, submitted changes, control requests and documents.
Worked with IAM service creating new IAM users & groups, defining roles and policies and Identity providers.
Created alarms and trigger points in CloudWatch based on thresholds, monitored the queues, and integrated CloudWatch alarms with PagerDuty.
Utilized AWS CloudWatch to monitor the environment for operational and performance metrics.
Integrated applications with various AWS services such as Amazon S3, DynamoDB, SQS (Simple Queue Service), SNS (Simple Notification Service) and Developed RESTful APIs using AWS API Gateway.
Ensuring security best practices are followed throughout the development process, including data encryption, identity, and access management (IAM), and network security.
Implemented security controls and compliance measures specific to AWS services and monitored and responded to security incidents and vulnerabilities.
Optimized application performance and cost by leveraging AWS features like auto-scaling, caching, and load balancing.
Implementing container orchestration using tools like Kubernetes on AWS or AWS Fargate.
Implemented CI/CD pipelines using AWS CodePipeline, CodeBuild, and CodeDeploy.
Environment: AWS Lambda, CloudFormation, CloudWatch, PagerDuty, Bitbucket, Bamboo, Splunk, Python, S3, SQS, SNS, Route53, Storage Gateway, API, Postman, IAM, RDS, JIRA.
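A minimal sketch of the Lambda work described in this role — a Python handler processing SQS-style records — is below. The event shape follows the standard SQS-to-Lambda integration, but the handler itself is an illustrative assumption, not code from the actual project:

```python
import json

def handler(event, context):
    """Hypothetical AWS Lambda handler: counts SQS records in the event.

    'event' follows the standard SQS -> Lambda integration shape;
    'context' is unused here, as in many small handlers.
    """
    records = event.get("Records", [])
    # Each SQS record carries its payload as a JSON string in "body"
    bodies = [json.loads(r["body"]) for r in records]
    # Return a small summary the caller (or a test) can assert on
    return {"statusCode": 200, "processed": len(bodies)}

# Local invocation with a fake event, outside AWS
if __name__ == "__main__":
    fake_event = {"Records": [{"body": json.dumps({"id": 1})}]}
    print(handler(fake_event, None))  # -> {'statusCode': 200, 'processed': 1}
```

Keeping the handler a plain function like this lets it be unit-tested locally with a fake event before it is packaged and deployed through CloudFormation or the CDK.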
Client: CISCO. Sep 2021 to Sep 2023
Role: Sr. Cloud DevOps Engineer
Responsibilities:
Implementing and managing continuous delivery systems and methodologies using Jenkins CI/CD Pipeline. Performed Continuous Integration by merging code into a central repository like Git and used the CI server Jenkins to build and validate code with a series of automated tests.
Implemented Datadog to configure monitoring, alerting, & Deploy agents on servers.
Configured Kibana for log analysis and monitoring of application performance on Kubernetes clusters.
Experienced in Installation, configuration & maintenance of Kafka cluster, Monitoring, and performance tuning of Kafka clusters. Ensuring high-throughput and fault-tolerant data processing within Kafka.
Worked on Monitoring Cluster health, Log monitoring, performance metrics, provisioning and managing the Elasticache for Redis Clusters. Managing Backups, snapshots & data persistence options.
Worked on Active Directory Federation Services (ADFS) issues with users Authentication problems with Single Sign-On solutions and provide users with authenticated access to applications that can’t use Integrated Windows Authentication through AD.
Served as an SME for the Applications Experience Team globally for Premier and Professional customers on critical support requests.
Monitor Mutual Authentication using Kerberos which is a security feature to validate the user’s identity to a service or application. Confirm if the client establishes security context by validating credentials to the underlying security authenticator.
Assisted customers in authentication elements when having Azure AD integrations with other Azure services.
Delivering solutions on customer-facing Multi-Factor Authentication issues for both On-Prem and Azure MFA Cloud-based.
Manage secure access to Azure Services and resources for the user’s authentication that are being validated or invalidated and the Policies that are executed which authenticate credentials to allow the genuine user to access organizational applications registered with Azure.
Provided assistance to the customers on the break-fix during the maintenance of the ADFS certificate transition for the user authentication to meet the daily operations.
Monitored Peers Pending Support Requests as a Subject Matter Expertise in Enterprise Applications, App Registrations, and App Proxy related integrations with Azure AD as Service Provider and Hybrid Auth Provider to successfully resolve the customer roadblocks.
Understand the current state of the customer production environment and determine the impact of new implementation on the existing environment that affects Organizational Resources.
Providing advisories for the customers who are rolling out Conditional Access Policies in the PROD environment to meet their Infrastructure & Security Requirements to increase assurance and protect organizational data.
Assisted Rapid Response customers with migrating from MIM to Azure AD IAM solutions as needed.
Environment: Docker, Chef, AWS, S3, Grafana, SonarQube, Prometheus, EBS, RDS, IIS, SVN, ANT, Jenkins, LAMP, AnthillPro, Maven, Apache Tomcat, Shell, Perl, JFrog, EC2, JUnit, Python, EKS, ECR.
Client: InComm Payments. Mar 2019 – Aug 2021
Role: Sr. DevOps/Cloud Engineer
Responsibilities:
Integrated Kibana with other DevOps tools such as Prometheus, Grafana, and Jaeger to provide end-to-end monitoring and observability of Kubernetes clusters and applications.
Contributed to the development of NewRelic plugins and integrations with other technologies and platforms, such as AWS, expanding the capabilities of the monitoring solution and enabling cross-functional collaboration.
Developed and maintained log aggregation pipelines using Kubernetes logging frameworks such as EFK (Elasticsearch, Fluentd, Kibana) or ELK (Elasticsearch, Logstash, Kibana).
Used Python to integrate with CI/CD processes to automate build, test, and deployment pipelines. Integrating JFrog Artifactory with build systems, CI/CD Pipelines, and Managing access control for users and groups.
Worked with other teams to ensure the use of Datadog’s features in the CI/CD pipeline.
Developed web modules and middleware components in Java and JSON using AWS.
Used Prometheus and Grafana to monitor and analyze cloud-native technologies & provide deep insights into service mesh, serverless, and CI/CD pipelines.
Managed OpenShift masters and nodes, handling upgrades and decommissioning by evacuating nodes from active participation before upgrading them.
Used version control (Git) to store Spark applications, scripts, and configurations, tracking changes and ensuring reproducibility.
Wrote Ansible playbooks, the entry point for Ansible provisioning, where the automation is defined through tasks in YAML format. Ran Ansible scripts to provision Dev servers.
Used Ansible Tower which provides an easy-to-use dashboard and role-based access control which makes it easier to allow individual teams access to use Ansible for their deployments.
Created Reports, Pivots, alerts, advanced Splunk search and Visualization in Splunk enterprise.
Implemented and managed log monitoring and logging tools such as CloudWatch and ELK stack, resulting in improved visibility and faster incident response times.
Used Ruby and Python to automate provisioning with Ansible and Terraform for tasks such as encrypting EBS volumes backing AMIs and scheduling Lambda functions for routine AWS tasks.
Built and maintained Docker container clusters managed by Kubernetes on Linux using Bash, Git, and Docker; utilized Kubernetes and Docker as the runtime environment for the CI/CD system to build, test, and deploy.
Monitored new replica deployments using Kubernetes tools such as kubectl, Prometheus, and Grafana
Utilized CloudWatch Logs Insights to quickly identify and troubleshoot application issues, reducing downtime and improving customer experience.
Environment: AWS Lambda, Jenkins, NewRelic, Kibana, OpenShift, EMR, WebLogic, JFrog, JIRA, Ansible, Oracle, Terraform, ServiceNow, Python, Spark, Java, Linux, Apache Tomcat, ELK, Git, LDAP, NFS, NAS, MS SharePoint, XML, Fedora, Windows, Splunk, Perl scripts, Shell scripts, Prometheus, Grafana, Chef, Docker, Kubernetes
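A playbook of the kind described in this role (YAML tasks used to provision Dev servers) could be sketched as follows; the host group, package, template, and service names are hypothetical placeholders, not from the actual project:

```yaml
# Hypothetical playbook: host group and package names are illustrative
- name: Provision Dev servers
  hosts: dev
  become: true
  tasks:
    - name: Install web server package
      ansible.builtin.yum:
        name: httpd
        state: present

    - name: Deploy application configuration from a template
      ansible.builtin.template:
        src: app.conf.j2
        dest: /etc/httpd/conf.d/app.conf
      notify: Restart httpd

  handlers:
    - name: Restart httpd
      ansible.builtin.service:
        name: httpd
        state: restarted
```

Run from Ansible Tower or the CLI (`ansible-playbook -i inventory dev.yml`), the handler pattern here restarts the service only when the rendered configuration actually changes.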
Client: Equifax. May 2017 - Feb 2019
Role: Cloud DevOps Engineer
Responsibilities:
Participated in the release cycle of the product, which involved Development, QA, UAT, and Production environments.
Used Python to automate tasks in Kubernetes/OpenShift environments and wrote scripts that integrate with CI/CD pipelines.
Developed utilities for developers to check the checkouts, and elements modified based on project and branch.
Used Ant Scripts to automate the build process.
Enabled highly secure and reliable communication between Internet of Things (IoT) applications and the devices it manages.
Created and Maintained Subversion Branching, Merging and tagging across each Production release and performed builds with Continuous Integration using Scripts.
Accountable for on-time delivery of all release process outputs as defined in the release policy, processes, and procedures.
Developed UNIX and Perl Scripts for the purpose of manual deployment of the code to the different environments and e-mailed the team when the build was completed.
Worked with ServiceNow's Agile Development module to manage Agile projects and track progress using Scrum or Kanban methodologies.
Created deployment request tickets in Bugzilla for deploying the code to Production.
Attended the Minor/Major Event change control meetings to get necessary approvals for the deployment request.
Suggested the latest upgrades and technologies for NoSQL databases.
Evaluated system performance and validated NoSQL solutions.
Used Perl/Shell to automate the build and deployment Process.
Implemented a Continuous Delivery framework using Jenkins, Maven, and Nexus in a Linux environment.
Coordinated with developers, Business Analysts, and managers to make sure that code was deployed in the Production environment.
Environment: ANT, WebSphere, Spark, Perl/Shell Scripts, Python, Oracle, UNIX, Bugzilla, Jenkins, Puppet, Maven, AWS, NoSQL.
Client: D2Eminence Mar 2016 – April 2017
Role: Build Release Engineer
Responsibilities:
Automated builds for both .NET and Java applications using Hudson.
Implemented end-to-end automation from build to production.
Revoked all unauthorised access and applied the Principle of Least Privilege with the aid of these tools.
ITIL best practices were brought into the normal SDLC process and led the effort of bringing change to the organization.
Facilitated projects in quality-related activities as per the QMS process.
Maintained configuration items in Harvest.
Generated Audit Trail Reports and Time Sheet Reports.
Prepared mock cutover plans and cutover plans for Pre-Prod and Prod deployments.
Conducted and attended Project Status Review Meetings and Causal Analysis Meetings for release activities.
Prepared resources (people and environments) for builds and releases.
Conducted configuration audits as per schedule, reported the configuration audit findings, and tracked the findings to closure.
Ensured that backups were taken periodically.
Carried out release responsibilities as instructed by the PM.
Responsible for the build and release management process.
Responsible for automated build scripts and resolving build issues.
Coordinated with development teams to perform builds and resolve build issues.
Provided complete phone support to customers.
Set up and debugged hardware-related issues for server builds.
Coordinated with developers, Business Analysts, and managers to make sure that code was deployed in the Production environment.
Environment: ANT, Maven, WebLogic, Perl scripts, Shell scripts, Linux, SVN, Hudson.
Client: CVL Soft Pvt Ltd. Jan 2015 – Feb 2016
Role: Linux Administrator
Responsibilities:
Administered and maintained Red Hat Linux 3.0, 4.0, 5.0, and 6.0 AS/ES; troubleshot hardware, operating system, application, and network problems and performance issues; deployed the latest patches for Linux and application servers; performed Red Hat Linux kernel tuning.
Implemented and enhanced UNIX shell scripts using Korn shell, Bash, and Perl. Wrote Bash scripts to schedule and automate processes, including full and incremental backups using tar, and to migrate and enlarge file systems on Solaris.
Configured cron jobs in the crontab file to run jobs at specific intervals.
Created and maintained network users, user environment, directories, and security.
Participated in root-cause analysis of recurring issues and in system backup and security setup; provided 24/7 support in Production, Testing, and Development environments.
Configured RAID levels 1, and 5 using hardware RAID Controllers and RAID levels 10, 01, and 51 using Volume Management software.
Administered Linux servers for several functions including managing Apache/Tomcat server, mail server, and MySQL databases in both development and production.
Experience in implementing and configuring network services such as HTTP, DHCP, and TFTP.
Installed and configured DHCP, DNS (BIND, MS), web (Apache, IIS), mail (SMTP, IMAP, POP3), and file servers on Linux servers.
Experienced working with Preload Assist and PICS projects.
Installed and set up Oracle 9i on Linux for the development team.
Migrated database applications from Windows 2000 Server to Linux server.
Performed Linux kernel and memory upgrades and managed swap areas; performed Red Hat Linux Kickstart installations.
Performed capacity planning, structural design, and system ordering.
Created users, managed user permissions, and maintained User & File System quota on Red Hat Linux.
Diagnosed hardware and software problems and provided solutions to them.
Updated data in the inventory management package for Software and Hardware products.
Worked with DBAs on installation of RDBMS database, restoration, and log generation.
Wrote Bash shell scripts to automate routine activities.
Monitored trouble ticket queue to attend user and system calls.
Environment: Red Hat Linux 3.0/4.0/5.0 AS ES, HP DL585, Oracle 9i/10g, Samba, VMware, Tomcat 3.x/4.x/5.x, Apache Server 1.x/2.x, Bash.
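The cron scheduling mentioned in this role can be illustrated with a couple of crontab entries; the paths, times, and retention period below are hypothetical examples, not from the actual environment:

```
# Hypothetical crontab entries: paths and schedules are illustrative
# Nightly full backup of /etc at 02:30 (note: % must be escaped in crontab)
30 2 * * * tar -czf /backup/etc-$(date +\%F).tar.gz /etc
# Weekly cleanup of backups older than 30 days, Sundays at 03:00
0 3 * * 0 find /backup -name '*.tar.gz' -mtime +30 -delete
```

Entries like these are installed per-user with `crontab -e` and run by the cron daemon at the specified minute/hour/day fields.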