Junior DevOps Engineer (w/m)
Deutsche Telekom · València, ES
MySQL MongoDB Python Agile Azure NoSQL Linux Docker Kubernetes Microservices AWS Bash DevOps PostgreSQL Go
We are Deutsche Telekom IT Spain, located in Valencia, a digital, scientific, and cultural hub of Spain. We work as part of T-Systems Iberia, though with a separate structure and projects. We are excited to announce an opening for a Junior DevOps Engineer position.
A DevOps Engineer is a professional who works at the intersection of software development (Dev) and IT operations (Ops). Their primary goal is to streamline the process of creating, deploying, and maintaining software applications, making it more efficient, reliable, and agile.
A DevOps Engineer bridges the gap between these two areas by using a combination of technical skills, tools, and methodologies. They work closely with both development and operations teams to ensure that the software is developed and deployed smoothly, with minimal downtime and maximum efficiency.
In our company, cross-functional teams work closely with the development teams on their journey to DevOps. Our organisation is distributed across several countries: Slovakia, Germany, Greece, Hungary, India, and others. In our teams, we aim for automation and slick solutions to provide a modern and stable runtime stack. In our culture, we strive to create a start-up atmosphere within the enterprise.
You will be working within a lean and automated approach, using modern technologies and methods like GitOps, Infrastructure-as-Code on AWS, CI/CD and, of course, Agile.
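For a taste of the day-to-day scripting such a role involves, here is a minimal sketch of ours (not a Deutsche Telekom artifact), assuming a configured kubectl and an accessible Kubernetes cluster; the namespace handling is illustrative:

```python
#!/usr/bin/env python3
"""Toy health check in the spirit of the stack above: list pods that are
not Running in a namespace. Assumes kubectl is installed and pointed at a
cluster; everything here is illustrative, not part of the job posting."""
import json
import subprocess
import sys

def unhealthy_pods(namespace: str = "default") -> list[str]:
    # Ask kubectl for the pod list as JSON and parse each pod's phase.
    out = subprocess.run(
        ["kubectl", "get", "pods", "-n", namespace, "-o", "json"],
        capture_output=True, text=True, check=True,
    ).stdout
    pods = json.loads(out)["items"]
    return [
        p["metadata"]["name"]
        for p in pods
        if p["status"].get("phase") != "Running"
    ]

if __name__ == "__main__":
    bad = unhealthy_pods(sys.argv[1] if len(sys.argv) > 1 else "default")
    print("all pods Running" if not bad else f"not Running: {bad}")
```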
Do you want to learn new technologies and contribute your knowledge? Then join us on this exciting journey and let's build the future together!
📒 Requirements:
- Currently enrolled in a relevant degree program
- Basic knowledge of Linux
- Understanding of microservices architecture and containerization (preferably Docker and Kubernetes)
- Experience in scripting (Python, Golang, Bash, etc.)
- Experience with Git version control
- Understanding of debugging distributed systems
- Readiness to closely collaborate with the development teams
- English B2
- Analytical thinking
- Ability to work in a multicultural environment
Would be a plus:
- Experience in CI/CD (GitLab)
- Experience with Agile development methodologies
- Experience with GitOps
- Cloud platforms (AWS, GCP, Azure)
- Experience with message brokers (Kafka/RabbitMQ)
- Relational databases (PostgreSQL, MySQL)
- NoSQL databases (MongoDB)
🤲 What we offer:
- Mentorship & Training: Gain hands-on experience with the support of industry experts.
- Real-World Experience: Work on real projects that impact Deutsche Telekom’s IT strategy.
- Career Development: The opportunity to grow within our organization and explore future full-time opportunities.
- Innovative Environment: Be part of a creative, collaborative, and forward-thinking team in a global tech leader.
Apply now to jumpstart your career with Deutsche Telekom IT Spain!
Data Engineer PowerCenter
Jan 29 · NPR Spain · Madrid, ES
Remote
At NPR Spain we are looking to hire several Data Engineers specialised in PowerCenter to join our Data & Analytics business unit. If you have experience with this platform and are looking for an exciting technical challenge, this is your opportunity.
💻 WHAT WILL YOU DO?
You will join an exciting data migration project, moving from DataStage to PowerCenter. The project has an initial duration of one year, with the possibility of continuing on other interesting challenges within the company.
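A recurring task in this kind of ETL migration is validating that the new platform reproduces the legacy output. Below is a minimal illustration of ours (not NPR Spain's methodology), assuming SQLAlchemy; the connection strings and table names are hypothetical:

```python
"""Toy reconciliation check for an ETL migration: compare row counts for a
set of tables between the legacy and the target database. All names and
connection strings are made up for the example."""
from sqlalchemy import create_engine, text

LEGACY_URL = "postgresql://user:pass@legacy-host/dwh"   # hypothetical
TARGET_URL = "postgresql://user:pass@target-host/dwh"   # hypothetical
TABLES = ["customers", "orders", "payments"]            # hypothetical

def row_count(engine, table: str) -> int:
    # Run a simple COUNT(*) and return the single scalar result.
    with engine.connect() as conn:
        return conn.execute(text(f"SELECT COUNT(*) FROM {table}")).scalar_one()

legacy = create_engine(LEGACY_URL)
target = create_engine(TARGET_URL)
for table in TABLES:
    a, b = row_count(legacy, table), row_count(target, table)
    print(f"{table}: legacy={a} target={b} -> {'OK' if a == b else 'MISMATCH'}")
```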
🛠 WHAT ARE THE REQUIREMENTS?
• At least 3 years of experience working with PowerCenter.
🚀 WHAT DO WE VALUE?
• Previous experience with DataStage.
• Involvement in data migration projects.
🏢 WHAT DO WE OFFER?
• Work model: 100% remote (you must be based in Spain).
• Competitive salary, fully negotiable according to your experience and skills.
• Professional development: opportunities for growth and continuous learning.
If you are an expert in data integration technologies, we want to meet you. Join our team and make a difference!
Team Leader Data Engineer Azure
Jan 29 · NPR Spain · Madrid, ES
Remote Python Azure Cloud Computing Big Data
At NPR Spain we are constantly looking for talent to complete our team.
We are currently looking for a senior Team Leader Data Engineer Azure to work on innovative projects in a dynamic, collaborative environment.
🔍 WHAT ARE WE LOOKING FOR?
A professional with experience in data engineering and solid knowledge of cloud technologies, especially Azure and Databricks, who wants to contribute to the development of scalable and efficient solutions.
💻 TECHNICAL REQUIREMENTS:
• Extensive experience with Azure and Databricks.
• Proficiency in Python and PySpark for designing and optimising data flows (see the sketch after this list).
• Experience implementing and managing Delta Live Tables.
• Knowledge and practical application of Lakehouse Federation strategies.
• Ability to design efficient, scalable data architectures in cloud environments.
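As a minimal illustration of the PySpark skills listed above (our sketch, not the client's code; paths and column names are hypothetical, and on Databricks a SparkSession is already provided):

```python
"""Toy PySpark transformation: read raw events, aggregate per user and day,
and write the result as a partitioned table."""
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("daily-user-metrics").getOrCreate()

events = spark.read.parquet("/mnt/raw/events")        # hypothetical path
daily = (
    events
    .withColumn("day", F.to_date("event_ts"))         # hypothetical column
    .groupBy("user_id", "day")
    .agg(F.count("*").alias("n_events"),
         F.sum("amount").alias("total_amount"))
)
# On Databricks you would typically write .format("delta") instead.
daily.write.mode("overwrite").partitionBy("day").parquet(
    "/mnt/curated/daily_user_metrics"
)
```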
🚀 We will also value
• Azure or Databricks certifications.
• Knowledge of other Big Data tools and technologies.
🏢 WHAT DO WE OFFER?
• Permanent contract.
• Competitive salary, in line with the candidate's experience and skills.
• 100% remote work from anywhere in Spain.
• A dynamic work environment with opportunities for learning and professional growth.
If you think you fit this profile and want to be part of an innovative team, we are waiting for you!
Azure Data Engineer
Jan 29 · CAS TRAINING · Madrid, ES
Remote Agile TSQL Azure Jenkins Docker DevOps Terraform
CAS Training, a leading company with more than 20 years of experience in technology consulting, outsourcing and specialised training, is looking for a DATA ENGINEER with THREE years of experience in data engineering projects in the Azure environment: Databricks (essential), Synapse, Data Factory, SQL Database.
Knowledge of Microsoft Fabric is desirable.
Essential: experience and autonomy in designing and creating data models (defining and creating tables, complex relationships and queries, large data volumes, stored procedures...).
Essential: experience developing pipelines and complex data transformations (ETLs).
Good level of SQL.
Experience working on agile projects.
Desirable: knowledge of and experience with methodologies and tools for continuous integration and deployment (Azure DevOps, GitHub, Jenkins, Terraform, Docker, Ansible...). Keywords for searches: Azure, Databricks, Synapse, Data Factory.
Languages: English B2 (at least).
Sector: Industry.
Work model: can be 100% remote, apart from the occasional meeting at one of the offices or at the client's site (exceptionally).
Machine Learning Engineer
Jan 28 · Deimos · Tres Cantos, ES
Python Agile Azure Linux C++ Cloud Computing AWS MATLAB Machine Learning
DEIMOS is looking for an engineer to join the Artificial Intelligence and Computer Vision (AICV) Competence Centre of the Avionics Business Unit, within the Flight Systems Directorate.
This role focuses on supporting Deimos’ AICV flight systems team in researching, developing, deploying, and scaling our Machine Learning and Computer Vision portfolio for onboard processing applications in Space. You will work on in-space Machine Learning projects and products throughout their lifecycle – from early-phase R&D activities to productization and deployment.
Location:
Selected candidates may work at any of the following Deimos sites where they are legally entitled to work:
- Harwell (UK)
- Madrid (Spain)
- Lisbon (Portugal)
- Bucharest (Romania)
Duties:
The main responsibilities are:
- Research, design, implement, and deploy Machine Learning algorithms that address specific challenges and opportunities related to onboard processing in Space.
- Collaborate with team members and clients across Europe to understand project requirements, objectives, and constraints.
- Integrate and collaborate closely with other Computer Vision and Machine Learning engineers for Agile algorithm and product development.
- Process and analyze datasets to extract meaningful insights and features.
- Design, implement, and maintain industry-standard infrastructure for new and existing Machine Learning products.
- Optimize and standardize ML training and validation processes, data warehousing, and pipelines (see the sketch below).
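For flavour, here is a minimal training-and-validation cycle of the sort the duties above describe (our sketch, not a Deimos pipeline; the data is synthetic and stands in for real onboard-imagery features, assuming scikit-learn and NumPy):

```python
"""Toy train/validate cycle: fit a small classifier on synthetic features
and report held-out validation accuracy."""
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 16))            # fake feature vectors
y = (X[:, 0] + X[:, 1] > 0).astype(int)    # fake labels

# Hold out 20% of the data for validation, then fit and score.
X_tr, X_val, y_tr, y_val = train_test_split(X, y, test_size=0.2, random_state=0)
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
print(f"validation accuracy: {accuracy_score(y_val, model.predict(X_val)):.3f}")
```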
Education:
Bachelor's, Master's, or Ph.D. in Computer Science, Engineering, Machine Learning, Data Science, Physics, or a related field.
Professional Experience:
The position will be tailored to the level of experience; practical industry experience deploying and maintaining ML systems in production would be viewed very positively.
Technical Requirements:
Required:
- Strong foundation in machine learning algorithms, statistics, and data structures within relevant technical projects.
- Proficiency in programming languages and frameworks such as Python, C++, MATLAB, OpenCV, TensorFlow, PyTorch, scikit-image, and dlib.
- Experience with data preprocessing, feature engineering, and model evaluation techniques.
- Knowledge of software development life cycle and agile methodologies.
- A basic understanding of Machine Learning.
Highly Desirable:
- Understanding of Computer Vision techniques.
- Experience working on aerospace-related projects.
- Experience deploying MLOps solutions and working within CI/CD frameworks.
- Experience with Linux systems and cloud infrastructure (AWS, Azure, etc.).
- Experience developing embedded ML applications (C++, CUDA, TensorRT).
Language Skills:
- Good level of English, spoken and written.
- Ability to speak Spanish, Portuguese, Italian, or Romanian will be considered an asset.
- Ability to speak any other language will also be considered a plus.
Personal Skills:
- Capability to integrate in and work within a trans-European team
- Solid organisational, analytical and reporting skills
- Autonomy and willingness to take initiative
- Excellent communication skills
- Energetic, positive team player mentality
Don’t miss this opportunity! If you meet the requirements and are ready to take on new challenges, apply now with the English version of your resume and become part of our team at Deimos.
Ref.: HRRECRUIT-1003
Senior Data Engineer (Python)
Jan 28 · CAS TRAINING · Madrid, ES
Remote Python Linux Cloud Computing Bash Big Data Power BI
CAS Training, a leading company with more than 20 years of experience in technology consulting, outsourcing and specialised training, is looking for a DATA ENGINEER with two to three years of experience on Python projects in cloud environments, for a remote project.
A degree in Computer Science/Telecommunications Engineering, Programming, Mathematics or Statistics is required, plus a Master's in Data Science, Big Data or a related field.
Spoken English.
Skills
At least 2 years of previous experience as a Data Engineer or Data Analyst.
Involvement in the design and creation of pipelines with Cloud or Open Source tools.
Python programming: knowledge of object-oriented programming, design and creation of data transformations, flow optimisation, data analysis (see the sketch after this list).
Data modelling: physical data modelling and logical data modelling. Migration from ETL technology to a Cloud or Open Source stack.
Hard skills
Essential experience in: Linux and bash scripting. Data processing with Python.
Data model design. Proficiency with relational and non-relational databases.
Data analytics using Python.
Data analytics using Business Intelligence tools such as Power BI.
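As a minimal illustration of the object-oriented transformation style mentioned above (our sketch; the class design, column names and data are all made up, assuming pandas):

```python
"""Toy object-oriented data transformation: small composable steps applied
to a pandas DataFrame by a pipeline object."""
import pandas as pd

class Step:
    """Base class: each step takes a DataFrame and returns a new one."""
    def apply(self, df: pd.DataFrame) -> pd.DataFrame:
        raise NotImplementedError

class DropNulls(Step):
    def __init__(self, columns): self.columns = columns
    def apply(self, df): return df.dropna(subset=self.columns)

class AddTotal(Step):
    def apply(self, df): return df.assign(total=df["price"] * df["qty"])

class Pipeline:
    def __init__(self, steps): self.steps = steps
    def run(self, df):
        for step in self.steps:   # apply each step in order
            df = step.apply(df)
        return df

raw = pd.DataFrame({"price": [10.0, None, 3.5], "qty": [2, 1, 4]})
print(Pipeline([DropNulls(["price"]), AddTotal()]).run(raw))
```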
What we offer
• Join a dynamic, highly qualified team in a company undergoing expansion.
• Take part in innovative, cutting-edge projects for major first-class clients across different market sectors.
• Long-term projects, professional stability and career progression.
• Permanent contract.
• Free access to the annual Cas Training course catalogue.
• Salary negotiable based on the candidate's experience and skills.
• 100% remote.
Data Engineer - Fintech
Jan 27 · Ebury · Madrid, ES
API Python Agile TSQL Docker Cloud Computing REST Jira Fintech Big Data Office
Ebury is a hyper-growth FinTech firm, named in 2021 as one of the top 15 European Fintechs to work for by AltFi. We offer a range of products including FX risk management, trade finance, currency accounts, international payments and API integration.
Data Engineer - Fintech
Madrid Office - Hybrid: 4 days in the office, 1 day working from home
Join Our Data Team at Ebury Madrid Office.
Ebury's strategic growth plan would not be possible without our Data team, and we are seeking a Data Engineer to join our Data Engineering team!
Our data mission is to develop and maintain Ebury's Data Warehouse and serve it to the whole company, where Data Scientists, Data Engineers, Analytics Engineers and Data Analysts work collaboratively to:
- Build ETLs and data pipelines to serve data in our platform
- Provide clean, transformed data ready for analysis and used by our BI tool
- Develop department and project specific data models and serve these to teams across the company to drive decision making
- Automate end solutions so we can all spend time on high-value analysis rather than running data extracts
Why should you join Ebury?
Want to work in a high-growth environment? We are always growing. Want to build a better world? We believe in inclusion. We stand against discrimination in all forms and have no tolerance for intolerance of the differences that make us a modern and successful organisation.
At Ebury you will find an internal group dedicated to discussing how we can build a more diverse and inclusive workplace for all people in the Technology Team, so if you're excited about this job opportunity but your background doesn't exactly match the requirements in the job description, we strongly encourage you to apply anyway. You may be just the right candidate for this or other positions we have.
What we offer:
- Variety of meaningful and competitive benefits to meet your needs
- Competitive salary
- You'll have continuous professional growth thanks to our career progression framework with regular reviews
- Equity process through a performance bonus
- Allowance to take annually paid time off as well as during local public holidays
- Continued personal development through training and certification
- Being part of a diverse technology team that cares deeply about culture and best practices, and believes in agile principles
- We are Open Source friendly, following Open Source principles in our internal projects and encouraging contributions to external projects
About our technology and Data stack:
- Google Cloud Platform as our main Cloud provider
- Apache Airflow and dbt Cloud as orchestration tools
- Docker as PaaS to deliver software in containers
- Cloud Build as CI/CD
- dbt as data modelling and warehousing
- Looker and Looker Studio as Business Intelligence/dashboarding
- Github as code management tool
- Jira as project management tool
Among other third-party tools such as: Hevodata, MonteCarlo, Synq...
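To show how a pipeline on such a stack might be orchestrated, here is a minimal sketch of ours (not Ebury's actual code), assuming a recent Airflow 2.x; the task names, callable and dbt selector are hypothetical:

```python
"""Toy Airflow DAG: extract from a source, load to the warehouse, then run
a dbt model. All callables and commands are illustrative placeholders."""
from datetime import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator
from airflow.operators.python import PythonOperator

def extract_and_load():
    # Placeholder: pull from a REST endpoint and load into the warehouse.
    print("extracted and loaded")

with DAG(
    dag_id="toy_fx_rates_pipeline",   # hypothetical name
    start_date=datetime(2025, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    extract = PythonOperator(task_id="extract_and_load",
                             python_callable=extract_and_load)
    transform = BashOperator(task_id="dbt_run",
                             bash_command="dbt run --select fx_rates")
    extract >> transform   # run the dbt model after the load succeeds
```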
Responsibilities:
- Be mentored by one of our outstanding team members through a 30/60/90-day plan designed just for you.
- Participate in data modelling reviews and discussions to validate the model's accuracy, completeness, and alignment with business objectives.
- Design, develop, deploy and maintain ELT/ETL data pipelines from a variety of data sources (transactional databases, REST APIs, file-based endpoints).
- Deliver data models hands-on using solid software engineering practices (e.g. version control, testing, CI/CD).
- Manage overall pipeline orchestration using Airflow (hosted in Cloud Composer), as well as execution using GCP hosted services such as Container Registry, Artifact Registry, Cloud Run, Cloud Functions, and GKE.
- Work on reducing technical debt by addressing code that is outdated, inefficient, or no longer aligned with best practices or business needs.
- Collaborate with team members to reinforce best practices across the platform, encouraging a shared commitment to quality.
- Help to implement data governance policies, including data quality standards, data access control, and data classification.
- Identify opportunities to optimise and refine existing processes.
Experience and qualifications
- 3+ years of data/analytics engineering experience building, maintaining and optimising data pipelines and ETL processes in big data environments
- Proficiency in Python, SQL and Airflow
- Knowledge of software engineering practices in data (SDLC, RFC...)
- Stay informed about the latest developments and industry standards in Data
- Fluency in English
Even if you don't meet every requirement listed, we encourage you to apply; your skills and experience might be a great fit for this role or future opportunities!
We welcome applications from candidates who require a work permit. For non-EU/EEA nationals, the company may assist with the work permit process, depending on individual circumstances.
#LI-CG1
About Us
Ebury is a FinTech success story, positioned among the fastest-growing international companies in its sector.
Founded in 2009, we are headquartered in London and have more than 1700 staff with a presence in more than 25 countries worldwide. Cultural diversity is part of what makes Ebury a special place to be. From Sao Paulo to Dubai, Bucharest to Toronto, we enjoy sharing team experiences and celebrating success across the Ebury family.
Hard work pays off: in 2019, Ebury received a £350 million investment from Banco Santander and has won internationally recognised awards including Financial Times: 1000 Europe´s Fastest-Growing Companies.
None of this would have been possible without our proudest achievement: our great people. Enthusiastic, innovative and collaborative teams, always ready to disrupt and revolutionise the fast-paced FinTech sector.
We believe in inclusion. We stand against discrimination in all forms and have no tolerance for the intolerance of differences that makes us a modern and successful organisation. At Ebury, you can be whoever you want to be and still feel a sense of belonging no matter your story because we want you and your uniqueness to help write our future.
Please submit your application on the careers website directly, uploading your CV / resume in English.
Senior Machine Learning Engineer
Jan 27 · EPAM · Cyprus
Python TSQL Azure Jenkins Docker Kubernetes Git AWS Spark Machine Learning Office
We are looking for a Senior Machine Learning Engineer with a strong background in data science and software engineering to join us in Cyprus, working from our office in a flexible and hybrid work setup.
As a Machine Learning Engineer, you will develop and deploy machine learning models, work with large datasets, and collaborate with cross-functional teams to solve business problems.
This position is integral to one of our projects in the client's Finance IT area, focusing on the integration component of their finance landscape. If you're ready to leverage your skills and perspective to make a significant impact, apply now and help us transform our data capabilities in the finance and insurance industries.
#LI-DNI
Responsibilities
- Be responsible for the transition of machine learning algorithms to the production environment and their integration with the enterprise ecosystem
- Design, create, maintain, troubleshoot and optimize the complete end-to-end machine learning lifecycle
- Write specifications, documentation and user guides for developed solutions
- Build frameworks for data scientists to accelerate the development of production-grade machine learning models
- Collaborate with data scientists and engineering team to optimize the performance of ML pipeline
- Constant improvement of SDLC practices
- Establish and configure CI/CD/CT processes
- Design and maintain continuous training of ML models
- Provide capabilities for early detection of various drifts (data, concept, schema, etc.) - see the sketch after this list
- Promote and support MLOps practices
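As a toy illustration of the data-drift detection mentioned above (our sketch, not EPAM's tooling): a two-sample Kolmogorov-Smirnov test per feature, assuming SciPy and NumPy; the data and threshold are synthetic.

```python
"""Toy data-drift check: compare each feature's reference distribution with
live data using a two-sample KS test. Synthetic data stands in for both."""
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)
train = rng.normal(0.0, 1.0, size=(5000, 3))   # reference window
live = rng.normal(0.3, 1.0, size=(1000, 3))    # shifted mean -> drift

ALPHA = 0.01  # significance threshold, illustrative only
for i in range(train.shape[1]):
    stat, p = ks_2samp(train[:, i], live[:, i])
    flag = "DRIFT" if p < ALPHA else "ok"
    print(f"feature {i}: KS={stat:.3f} p={p:.2e} -> {flag}")
```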
Requirements
- 5+ years of experience as an ML Engineer or Data Engineer designing, building and deploying production applications and data pipelines
- Strong knowledge and experience in Python development
- Deep understanding of Python ML ecosystem (PyTorch, TensorFlow, NumPy, Pandas, Sklearn, XGBoost, etc.)
- Hands-on experience in implementation of Data Products
- Deep understanding of data preparation and feature engineering
- Understanding of Apache Spark Ecosystem (Spark SQL, MLlib/Spark ML)
- Deep hands-on experience with implementation of SDLC best practices in complex IT projects and with data processing paradigms
- Knowledge and experience in computer science disciplines such as data structures, algorithms, and software design patterns
- Experience with some of the MLOps related platform/technology such as AWS SageMaker, Azure ML, GCP Vertex AI/AI Platform, Databricks MLFlow, Kubeflow, Airflow, Argo Workflow, TensorFlow Extended (TFX), etc
- Experience with basic software engineering tools, e.g., git, CI/CD environment (such as Jenkins or Buildkite), PyPi, Docker, Kubernetes, unit testing and general object-oriented design
- Fluent English
We offer
- Private healthcare insurance
- Regular performance assessments
- Family friendly initiatives
- Corporate Programs including Employee Referral Program with rewards
- Learning and development opportunities including in-house training and coaching, professional certifications, over 22,000 courses on LinkedIn Learning Solutions and much more
*All benefits and perks are subject to certain eligibility requirements
DevOps Manager
Jan 27 · Krell Consulting & Training · Madrid, ES
Remote Docker Kubernetes AWS DevOps Terraform
Description
We are looking for a DevOps Manager to work with a major client, with an initial contract through KRELL.
DESCRIPTION AND DUTIES
DevOps engineering with solid experience in designing and building platforms as a service (such as Kubernetes, Nomad, etc.) that will help the team build the next runtime environment for data and product engineering.
- Broad knowledge of runtime environments.
- Strong problem-solving skills.
- Extensive experience with AWS and Terraform.
- Extensive experience with observability and alerting (such as Grafana, Prometheus, Kibana); see the sketch after this list.
- Lead and engage the team in its mission; coordinate teamwork.
- Embrace shared ownership of the project and the team's mission, contributing to a collective sense of responsibility and purpose.
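As a small illustration of the observability stack mentioned above (our sketch, assuming the prometheus_client Python package; the metric names and workload are made up):

```python
"""Toy Prometheus exporter: expose a request counter and a queue-depth
gauge on port 8000 for a Prometheus server to scrape."""
import random
import time

from prometheus_client import Counter, Gauge, start_http_server

REQUESTS = Counter("app_requests_total", "Total requests handled")
QUEUE_DEPTH = Gauge("app_queue_depth", "Items currently queued")

if __name__ == "__main__":
    start_http_server(8000)  # metrics at http://localhost:8000/metrics
    while True:
        REQUESTS.inc()                            # fake request handled
        QUEUE_DEPTH.set(random.randint(0, 50))    # fake queue depth
        time.sleep(1.0)
```

In a real setup, a Prometheus alerting rule (and a Grafana panel) would then fire on these series; that part lives in Prometheus/Grafana configuration rather than in this script.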
Skills: Kubernetes, Docker, AWS, Terraform.
Advanced English.
WHAT WE OFFER
- Permanent in-house contract and long-term positions.
- Access to a consolidated company with projects at a very high technological level across different sectors.
- You will find a dynamic and inclusive work environment.
- 100% remote work.
We are waiting for you! Sign up and take a leap in your career!