Job description
EY is seeking an experienced Data Engineer to create, schedule, and optimize data processing pipelines using Apache Spark and Apache Airflow. The role requires expertise in data engineering within a banking context, with a focus on building scalable, efficient data infrastructure for complex analytics projects.
This role involves designing, building, and deploying data pipelines that clean and process data for analysis, developing reusable pipeline components, and identifying opportunities to automate manual processes and optimize data delivery.
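As an illustration of the kind of cleaning step such pipelines perform, here is a minimal sketch. It is shown in plain Python so it is self-contained; in practice this logic would run as Spark DataFrame transformations, and the field names and rules below are hypothetical, not taken from the posting:

```python
# Hypothetical cleaning step for a data pipeline: drop incomplete records
# and normalise field formats before the data reaches analysis. The schema
# (account_id, amount) is an illustrative assumption.

def clean_records(records):
    """Drop records missing required fields and normalise the rest."""
    required = ("account_id", "amount")  # hypothetical required fields
    cleaned = []
    for rec in records:
        if any(rec.get(field) is None for field in required):
            continue  # skip incomplete records rather than guessing values
        cleaned.append({
            "account_id": str(rec["account_id"]).strip().upper(),
            "amount": round(float(rec["amount"]), 2),
        })
    return cleaned

raw = [
    {"account_id": " gb29a ", "amount": "100.5"},
    {"account_id": None, "amount": "5.0"},  # dropped: missing account_id
]
print(clean_records(raw))
```

In a production pipeline an Airflow DAG would typically schedule a step like this alongside extraction and load tasks, with the cleaning rules factored out as a reusable component.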
The ideal candidate will have a minimum of 3 years of hands-on experience building complex data pipelines; strong skills in Apache Spark, data processing, Python, and SQL; and experience with deployment and automation tools such as Docker and Kubernetes. Advanced knowledge of distributed systems, machine learning model training, and cloud environments is essential.
EY offers a dynamic global work environment with continuous learning opportunities, success-driven culture, transformative leadership development, and a diverse, inclusive workplace. The role provides a chance to work on exciting projects with global brands and develop professional skills in a rapidly evolving technology landscape.
All Rights Reserved | 2024 | Canary Wharfian