I am a U.S.-based consultant specializing in DevOps, Automation, and Cloud-Native Architecture, with about 15 years of DevOps and software development experience. During my consulting career, I have worked on projects ranging from leading cloud migrations, to large-scale operations transformation and automation, to managing and maintaining Big Data infrastructure.
Outside of my independent consulting career, I’ve worked as a full-time employee at Cray, the supercomputer manufacturer (now part of HPE), where I helped with their initiative to bring High-Performance Computing (HPC) capabilities to the cloud, and later at Couchbase, where I worked on Couchbase Cloud, a Database-as-a-Service platform. Prior to that, I worked full-time at Rakuten, Electronic Arts (internship), and NBCUniversal.
Some of my most recent consulting work includes:
- Worked with a successful startup on migrating their infrastructure from ECS to AWS EKS, including development of CI/CD pipelines and Helm charts to support rapid deployment and end-to-end testing on Kubernetes infrastructure.
- Contracted by an Orlando-based hospitality firm to develop Chef cookbooks and Ansible playbooks for their large data-center-hosted infrastructure.
- Contracted by a Chicago-based university to plan and execute the migration of legacy .NET web services into containerized Java/Spring microservices in the AWS Cloud (ECS).
- The project above included getting the university department up to speed with DevOps culture, including Infrastructure Orchestration practices (using Terraform) and Pipeline as Code (setting up Jenkins CI/CD to build and deploy Docker images).
- Set up Puppet Open Source infrastructure for the Time Warner Cable analytics team, including Hiera and Foreman, and used it to automate the configuration of complex Hortonworks environments.
- Designed and built Hortonworks HDP clusters and worked with data scientists to tune Spark and YARN resource parameters for optimal execution of memory-intensive Spark jobs.
- Assisted and mentored data scientists and developers in refactoring Spark jobs into testable code by removing environmental dependencies and replacing them with mocks for HDFS and other external systems.
- Worked closely with engineers using Hadoop clusters and provided operational support, including ensuring accessibility and security, troubleshooting system and application issues, installing and configuring new services, and other operational responsibilities.
- Set up Jenkins CI/CD pipelines for complex Big Data applications, such as Storm topologies and Spark jobs.