We are looking for a Data Engineer to play a pivotal role in developing and implementing a comprehensive data platform for the GES business. The successful candidate will have a proven track record of building integrated data tools and platforms to support global businesses. This fast-paced, results-oriented role will involve developing the platform to support key decisions for Global Engineering Services. You will work closely with Engineering, Product, and Technical Program teams as you develop forward-looking architecture and build strategies to complement the revolutionary GEIST vision.
In this role, you will have the freedom (and encouragement) to experiment, improve, invent, and innovate on behalf of our customers. You will have the satisfaction of being able to look back and say you were a key contributor to something special from its earliest stages.
Key job responsibilities
• Design, implement, and support data warehouse/data lake infrastructure using the AWS big data stack: Python, Redshift, QuickSight, Glue/Lake Formation, EMR/Spark, Athena, etc.
• Develop and manage ETL jobs that source data from various systems and create a unified data model for analytics and reporting (a minimal example of such a job is sketched after this list).
• Create and support real-time data pipelines built on AWS technologies including EMR, Glue, Redshift/Spectrum, and Athena.
• Continually research the latest big data technologies to provide new capabilities and increase efficiency.
• Manage numerous requests concurrently and strategically, prioritizing when necessary.
• Partner/collaborate across teams/roles to deliver results.
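Purely to illustrate the kind of ETL and data-lake work described above, the following minimal PySpark sketch reads raw extracts from a data lake, conforms them into a unified model, and writes a partitioned, columnar table that Athena or Redshift Spectrum could query. All bucket names, paths, and column names are hypothetical and are not part of this role's actual systems.

# Illustrative sketch only: a minimal PySpark batch ETL in the spirit of the
# responsibilities above. Buckets, paths, and columns are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("ges-unified-model-etl").getOrCreate()

# Read raw source extracts landed in the data lake (hypothetical locations).
orders = spark.read.json("s3://example-raw-bucket/engineering/orders/")
sites = spark.read.parquet("s3://example-raw-bucket/engineering/sites/")

# Conform the sources into one analytics-friendly model.
unified = (
    orders.join(sites, on="site_id", how="left")
          .withColumn("order_date", F.to_date("order_timestamp"))
          .select("order_id", "site_id", "region", "order_date", "order_value")
)

# Write a partitioned, columnar table that Athena or Redshift Spectrum can query.
(unified.write
        .mode("overwrite")
        .partitionBy("order_date")
        .parquet("s3://example-curated-bucket/unified/orders/"))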
About the team
The GEIST team serves as the software solutions team for all Global Engineering needs, working with its customers to set a consolidated tech strategy and prioritizing investments to best serve their needs and our bottom line. The GEIST team is responsible for sustaining a large product suite, ranging from Planning & Forecasting applications to robust in-flight management tools for field execution users. These systems are used by several thousand internal and external users around the globe.
BASIC QUALIFICATIONS
- Experience in data engineering
- Experience with data modeling, warehousing and building ETL pipelines
- Experience with one or more query languages (e.g., SQL, PL/SQL, DDL, MDX, HiveQL, SparkSQL, Scala)
- Experience with one or more scripting languages (e.g., Python, KornShell)
- Knowledge of cloud services such as AWS or equivalent
PREFERRED QUALIFICATIONS
- Experience with big data technologies such as Hadoop, Hive, Spark, and EMR
- Experience with an ETL tool such as Informatica, ODI, SSIS, BODI, Datastage, etc.
Amazon is an equal opportunities employer. We believe passionately that employing a diverse workforce is central to our success. We make recruiting decisions based on your experience and skills. We value your passion to discover, invent, simplify and build. Protecting your privacy and the security of your data is a longstanding top priority for Amazon.