Primary responsibilities include designing and implementing data solutions using industry best practices, building and operating batch and real-time data pipelines in production, and maintaining the data infrastructure that supports accurate extraction, transformation, and loading of data from diverse sources. The role also involves performing ETL/ELT operations, monitoring data pipelines, and ensuring high service availability.
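As a rough illustration of the extract-transform-load pattern this role centers on, here is a minimal, hedged sketch in plain Python. The file layout, column names (`user_id`, `amount`), and SQLite target are illustrative assumptions only; in practice this work would typically use PySpark, Databricks, or Snowflake as noted below.

```python
import csv
import sqlite3

# Hypothetical example: the source data, column names, and target schema
# are assumptions for illustration, not details of the actual role.

def extract(csv_text):
    """Parse CSV text into a list of row dicts (the 'extract' step)."""
    return list(csv.DictReader(csv_text.splitlines()))

def transform(rows):
    """Normalize types and drop incomplete records (the 'transform' step)."""
    cleaned = []
    for row in rows:
        if not row.get("user_id") or not row.get("amount"):
            continue  # skip records missing required fields
        cleaned.append({"user_id": row["user_id"], "amount": float(row["amount"])})
    return cleaned

def load(rows, conn):
    """Insert transformed rows into the target table (the 'load' step)."""
    conn.execute("CREATE TABLE IF NOT EXISTS purchases (user_id TEXT, amount REAL)")
    conn.executemany(
        "INSERT INTO purchases (user_id, amount) VALUES (:user_id, :amount)", rows
    )
    conn.commit()

if __name__ == "__main__":
    source = "user_id,amount\nu1,10.50\nu2,\nu3,7.25\n"
    conn = sqlite3.connect(":memory:")
    load(transform(extract(source)), conn)
    total = conn.execute("SELECT SUM(amount) FROM purchases").fetchone()[0]
    print(total)  # 17.75 (the row with a missing amount is dropped)
```

The same three-stage shape scales up directly: in a Spark-based pipeline, `extract` becomes a read from object storage or Kafka, `transform` becomes DataFrame operations, and `load` becomes a write to a warehouse such as Snowflake.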
Required experience includes 3-5 years working with distributed systems such as Spark or Kafka, along with expertise in PySpark, data pipelines, cloud-based data technologies, and tools like Databricks and Snowflake. Candidates must possess excellent problem-solving skills, a solid understanding of data engineering responsibilities, and proficiency working in Agile Scrum environments.
McAfee offers competitive benefits including a bonus program, pension and retirement plans, comprehensive medical coverage, paid time off, parental leave, and a commitment to diversity and inclusion. The role provides opportunities for continuous development, training, and mentorship, with a focus on supporting innovative data solutions in a dynamic technological environment.