JUNIOR DATA ENGINEER
New · Inetum
JUNIOR DATA ENGINEER
Inetum · Madrid, ES
Remote · Python TSQL Azure Cloud Computing PowerShell ITIL Power BI
Company Description
🚀 Join Inetum – We're Hiring a DATA ENGINEER! 🚀
At Inetum, a leading international digital consultancy, we empower 27,000 professionals across 27 countries to shape their careers, foster innovation, and achieve work-life balance. Proudly certified as a Top Employer Europe 2024, we’re passionate about creating positive and impactful digital solutions.
Job Description
We are seeking a highly motivated and technically proficient Data Platform Support Specialist to provide operational support for a data solution built on Azure Data Factory, Snowflake, and Power BI. This is not a development role, but rather a hands-on support position focused on ensuring the reliability, performance, and smooth operation of the data platform.
Qualifications
Key Responsibilities:
- Monitor and maintain data pipelines in Azure Data Factory, ensuring timely and accurate data movement.
- Provide support for Snowflake data warehouse operations, including troubleshooting queries, managing access, and optimizing performance.
- Assist business users with Power BI dashboards and reports, resolving issues related to data refreshes, connectivity, and visualization errors.
- Collaborate with data engineers and analysts to ensure platform stability and data integrity.
- Document support procedures and maintain knowledge base articles for recurring issues.
- Communicate effectively in English with internal teams and stakeholders.
- Advanced English (B2 or above).
- Hands-on experience with:
- Azure Data Factory (monitoring, troubleshooting, pipeline execution).
- Snowflake (SQL, performance tuning, user management).
- Power BI (report troubleshooting, data sources, refresh schedules).
- Strong analytical and problem-solving skills.
- Ability to work independently and manage multiple support tasks.
- Familiarity with ITIL or similar support frameworks is a plus.
- Experience in data governance or data quality monitoring.
- Basic understanding of cloud infrastructure (Azure).
- Knowledge of scripting (e.g., PowerShell or Python) for automation (see the sketch below).
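As a point of reference, here is a minimal Python sketch of the kind of pipeline-run monitoring and scripting automation this listing describes, assuming the azure-identity and azure-mgmt-datafactory SDKs; the subscription, resource group, and factory names are placeholders, not Inetum's actual setup.

```python
# Hypothetical sketch: flag failed Azure Data Factory pipeline runs from the
# last 24 hours. All resource identifiers below are placeholders.
from datetime import datetime, timedelta, timezone

from azure.identity import DefaultAzureCredential
from azure.mgmt.datafactory import DataFactoryManagementClient
from azure.mgmt.datafactory.models import RunFilterParameters

SUBSCRIPTION_ID = "<subscription-id>"   # placeholder
RESOURCE_GROUP = "<resource-group>"     # placeholder
FACTORY_NAME = "<data-factory-name>"    # placeholder

client = DataFactoryManagementClient(DefaultAzureCredential(), SUBSCRIPTION_ID)

# Query all pipeline runs updated in the last 24 hours.
now = datetime.now(timezone.utc)
filters = RunFilterParameters(last_updated_after=now - timedelta(hours=24),
                              last_updated_before=now)
runs = client.pipeline_runs.query_by_factory(RESOURCE_GROUP, FACTORY_NAME, filters)

# Print an alert line for every failed run so it can feed a ticket or knowledge-base entry.
for run in runs.value:
    if run.status == "Failed":
        print(f"ALERT: {run.pipeline_name} run {run.run_id} failed: {run.message}")
```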
Senior Data Engineer 100 (m/w/d)
17 Nov · Julius Baer
Madrid, ES
Senior Data Engineer 100 (m/w/d)
Julius Baer · Madrid, ES
Python Agile TSQL Jenkins Linux Docker Cloud Computing Kubernetes Microservices Git Oracle DevOps Machine Learning
At Julius Baer, we celebrate and value the individual qualities you bring, enabling you to be impactful, to be entrepreneurial, to be empowered, and to create value beyond wealth. Let’s shape the future of wealth management together. Support the development of a Python-based enterprise data hub (integrated with Oracle) and advance the MLOps infrastructure. This role combines DevOps excellence with hands-on machine learning engineering to deliver scalable, reliable, and auditable ML solutions. Key objectives include automating CI/CD pipelines for data and ML workloads, accelerating model deployment, ensuring system stability, enforcing infrastructure-as-code, and maintaining secure, compliant operations.
YOUR CHALLENGE
- Design and maintain CI/CD pipelines for Python applications and machine learning models using GitLab CI/Jenkins, Docker, and Kubernetes
- Develop, train, and evaluate machine learning models (e.g., using scikit-learn, XGBoost, PyTorch) in close collaboration with data scientists
- Orchestrate end-to-end ML workflows including pre-processing, training, hyperparameter tuning, and model validation
- Deploy and serve models in production using containerised microservices (Docker/K8s) and REST/gRPC APIs
- Manage the MLOps lifecycle via tools like MLflow (experiment tracking, model registry) and implement monitoring for drift, degradation, and performance (see the sketch after this list)
- Refactor exploratory code (e.g., Jupyter notebooks) into robust, testable, and version-controlled production pipelines
- Collaborate with data engineers to deploy and optimise the data hub, ensuring reliable data flows for training and inference
- Troubleshoot operational issues across infrastructure, data, and model layers; participate in incident response and root cause analysis
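As an illustration of the MLflow-based lifecycle referenced above, here is a minimal sketch of experiment tracking and model registration using MLflow and scikit-learn; the experiment name, model, and dataset are illustrative assumptions, not the actual Julius Baer setup.

```python
# Minimal MLflow tracking/registry sketch (illustrative names throughout).
import mlflow
import mlflow.sklearn
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split

mlflow.set_experiment("demo-data-hub-model")  # hypothetical experiment name

X, y = load_diabetes(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

with mlflow.start_run():
    params = {"n_estimators": 200, "max_depth": 6}
    model = RandomForestRegressor(**params).fit(X_train, y_train)

    # Log parameters and an evaluation metric so runs can be compared later.
    mlflow.log_params(params)
    mlflow.log_metric("test_mse", mean_squared_error(y_test, model.predict(X_test)))

    # Register the fitted model so it can be promoted and served from the registry.
    mlflow.sklearn.log_model(model, artifact_path="model",
                             registered_model_name="demo-regressor")
```

From there, deployment would typically load a registered model version (for example via `mlflow.pyfunc.load_model`) inside a containerised service, which is the Docker/Kubernetes part of the role.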
YOUR PROFILE
- Technical Proficiency: Strong skills in Python, Linux, CI/CD, Docker, Kubernetes, and MLOps tools (e.g., MLflow). Practical experience with Oracle databases, SQL, and ML frameworks
- ML Engineering Aptitude: Ability to own the full ML lifecycle—from training and evaluation to deployment and monitoring—with attention to reproducibility and compliance
- Automation & Reliability: Committed to building stable, self-healing systems with proactive monitoring and automated recovery
- Collaboration & Communication: Effective team player in agile, cross-functional settings; able to communicate clearly across technical and non-technical audiences
Education and Skills Requirements:
- Education: Bachelor of Science (BS) in Computer Science, Engineering, Data Science, or related field. Certifications such as CKA, AWS/Azure DevOps Engineer, or Google Cloud Professional DevOps Engineer are a plus
Technical Skills:
- Proficient in Python, Git, and shell scripting
- Experienced with CI/CD pipelines (GitLab, Jenkins), Docker, and Kubernetes
- Skilled in SQL and Oracle database interactions
- Hands-on with MLOps frameworks (e.g., MLflow), model deployment, and monitoring
- Familiarity with microservices, REST/gRPC, and basic ML model evaluation techniques
Experience:
- Minimum 5 years in DevOps, SRE, or ML Engineering roles, with at least 2–3 years focused on data-intensive or machine learning systems
- Experience in financial services or regulated environments is highly valued
Languages:
- English is a must
We are looking forward to receiving your full job application through our online application tool. Further interesting job opportunities can be found on our Career site. Is this not quite what you are looking for? Set up a job alert by creating a candidate account here.
Embedded Software Engineer
15 Nov · GMV
Embedded Software Engineer
GMV · Madrid, ES
Remote · Linux
Are you ready to leave your mark and be part of something big? We are looking for an Embedded Software Engineer to join our Software Engineering team within the Intelligent Transport Systems (ITS) sector. If technology excites you and you want to take part in innovative fleet-management and ticketing projects, this is your chance to be part of solutions that move the world and will transform the future of transport! We like to get straight to the point, so we will tell you what is not out on the web. If you want to know more about us, visit the GMV website.
WHAT CHALLENGE WILL YOU FACE?
Within our team you will carry out the following duties:
Embedded software development: design, implement, and maintain embedded software for intelligent transport systems.
Systems integration: collaborate with multidisciplinary teams to integrate software solutions with other existing systems.
Optimisation and performance: improve the performance and efficiency of fleet-management systems.
Testing and validation: run thorough tests and validate the software to ensure its reliability and robustness.
Documentation: create and maintain detailed technical documentation.
WHAT DO WE NEED ON OUR TEAM?
For this position we are looking for candidates with a degree in Computer Engineering, Telecommunications Engineering, Electronic Engineering, or similar, with experience in embedded software development and knowledge of .NET, QML, and Linux/C++. Previous experience and knowledge of Linux, Qt, unit testing, and frontend technologies for HMI development will also be valued.
WHAT DO WE OFFER?
💻 Hybrid working model and 8 weeks a year of remote work outside your usual geographic area.
🕑 Flexible start and finish times, and an intensive (shorter) working day on Fridays and in summer.
🚀 Personalised career plan, training, and support for language learning.
🌍 National and international mobility. Coming from another country? We offer a relocation package.
💰 Competitive pay with ongoing reviews, flexible remuneration, and brand discounts.
💪 Wellbeing programme: medical, dental, and accident insurance; free fruit and coffee; training in physical, mental, and financial health; and much more!
⚠️ In our selection processes you will always have phone and personal contact, in person or online, with our talent acquisition team, and we will never ask for bank transfers or card details. If you are contacted through any other process, write to our team at [email protected]
❤️ We promote equal opportunities in hiring and are committed to inclusion and diversity.
WHAT ARE YOU WAITING FOR? JOIN US
Data Engineer (f/m/d)
14 Nov · Axpo Group
Data Engineer (f/m/d)
Axpo Group · Madrid, ES
Remote · Python Agile TSQL Azure Cloud Computing Power BI
Workload: 100%
Join Axpo's Group IT to help shape the leading IT organization in the European energy sector. You'll design and scale the data backbone behind our BI solutions, collaborating, innovating, and delivering real impact.
What you will do:
- Design, build, and maintain robust data pipelines with Databricks and Azure Data Factory across on-prem and cloud sources (see the sketch after this list)
- Lead and optimize ETL for our data lakehouse to ensure consistency, accuracy, and performance
- Partner with BI engineers to model data for reporting and enable seamless delivery into Power BI
- Improve scalability, performance, and reliability of large-scale data workflows
- Troubleshoot complex pipeline and platform issues with a hands-on, proactive approach
- Champion best practices for data quality, security, and governance with the central data platform team
- Mentor junior engineers and drive knowledge-sharing within the team
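As a rough illustration of the Databricks pipeline work listed above, here is a minimal PySpark sketch that reads a raw table, applies a simple quality filter, and writes an aggregated Delta table for the reporting layer; the table and column names are assumed placeholders, not Axpo's actual data model.

```python
# Minimal Databricks/PySpark ETL sketch (placeholder table and column names).
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()  # provided by the Databricks runtime

# Read the raw source table (placeholder name).
raw = spark.read.table("raw.meter_readings")

# Basic data-quality filter and daily aggregation.
curated = (
    raw.filter(F.col("reading_kwh").isNotNull())          # drop incomplete rows
       .withColumn("reading_date", F.to_date("read_at"))  # normalise the timestamp
       .groupBy("meter_id", "reading_date")
       .agg(F.sum("reading_kwh").alias("daily_kwh"))
)

# Overwrite the curated Delta table that downstream Power BI models consume.
curated.write.mode("overwrite").format("delta").saveAsTable("curated.meter_readings_daily")
```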
What you bring & who you are:
- Solid experience in data engineering with Databricks (Python, PySpark, SQL) and Azure Data Factory
- Proven track record designing and tuning pipelines for high-performance, large-scale environments
- Strong grasp of cloud platforms, data lake and data warehouse concepts
- Experience with GitHub or similar version control and CI/CD tooling
- Comfortable supporting BI topics; experience with Power BI is a plus
- Advanced English skills; German and/or Spanish would be an advantage
- Analytical mindset with a background in engineering, mathematics, business, or a similar field
- Ideally: experience in software engineering and working in agile teams
- Even if you don't meet every requirement, we encourage you to apply; your potential matters
About the team: You'll join a friendly, collaborative Business Intelligence team that values openness, learning, and impact. We work cross-functionally, share knowledge, and support diverse perspectives to achieve great results together.
At Axpo Group, we are dedicated to fostering a culture of non-discrimination, tolerance, and inclusion. As an equal opportunity employer, we welcome applications regardless of race and ethnicity, gender identity and expression, sexual orientation, age, disability, as well as socioeconomic, cultural, and religious background. We are committed to ensuring a respectful and inclusive recruiting process and workplace for everyone.
Benefits:
At our company, we strive to create a culture of continuous learning, personal growth, and international community involvement. We're passionate about providing our employees with the tools and resources they need to succeed, and we're confident that you'll love being part of our team!
- Working Hours
We offer flexible working hours to accommodate your schedule: 60% remote and 40% at our offices in Madrid, Torre Europa.
- Meal allowances
You can enjoy delicious meals on us, no matter if you are working remotely or on-site.
*Option to use it for public transportation or childcare instead.
- Internet Compensation
We cover the cost of your home internet connection, as we understand how essential connectivity is in the modern workplace.
- Microsoft ESI Certifications
Access to the ESI (Enterprise Skills Initiative) certification program, which provides hands-on training to learn and enhance technical skills and knowledge of Microsoft and Azure technologies.
- Training courses
Our company is committed to helping our employees grow and develop their skills, which is why we offer a variety of industry-specific training courses and a learning channel.
- Gym Coverage
Stay active and healthy with our 90% coverage benefit, which provides access to the nearby gym, Forus Selection, to keep you energized throughout the day.
- Health Insurance
We take the health and well-being of our employees seriously, which is why we offer a comprehensive health insurance plan and the option to extend it to your spouse and children.
Department: IT / Technology · Role: Permanent position · Locations: Madrid · Remote status: Hybrid
Data Engineer WebFOCUS
13 Nov · CMV Consultores
Data Engineer WebFOCUS
CMV Consultores · Madrid, ES
Remote · TSQL Oracle SQL Server
At CMV Consultores we offer you the best opportunities with leading clients.
We are looking for a SENIOR DATA profile with solid experience in WebFOCUS development and administration, including report and dashboard design and maintenance of production environments. In-depth knowledge of versions 8 and 9, with specific experience in migration processes, compatibility issues, and architecture changes. Knowledge of Iway. Command of WebFOCUS DataMigrator / ETL, ReportCaster, and Security Center. Experience integrating WebFOCUS with relational databases (Oracle, SQL Server, etc.) and optimising queries and reports. Knowledge of Qlik would also be ideal.
What do we offer?
Permanent contract and a competitive salary based on experience.
Long-term project.
Systems Test Engineer
13 Nov · GMV
Madrid, ES
Systems Test Engineer
GMV · Madrid, ES
Git QA
Are you looking for an innovative, well-established place to grow professionally? GMV is your perfect opportunity! We are expanding our teams in the Defence sector to take part in the development of maximum-security products applied to Cross Domain, an area where we are a benchmark both nationally and internationally. We like to get straight to the point, so we will tell you what is not out on the web. If you want to know more about us, visit the GMV website.
WHAT CHALLENGE WILL YOU FACE?
You will join the test team and take part in software- and system-level testing activities for network security products. You will take part in running both manual and automated tests, in developing, extending, and maintaining the test plans, and in analysing and documenting results. You will also be involved in planning and documenting the test execution scenarios (real and virtual).
WHAT DO WE NEED ON OUR TEAM?
For this position we are looking for engineers with knowledge of testing techniques and of systems and software engineering, as well as experience in test design and automation. Command of scripting languages and version control tools (Subversion, Git) will be relevant. Experience and interest in information security, configuration management fundamentals, QA certifications, and deployment of network platforms (physical and virtual) will be valued.
WHAT DO WE OFFER?
🕑 Intensive working day (08:00-15:00) three days a week, one of which is always Friday, and every day in summer (July and August).
🚀 Personalised career plan and training.
🌍 National and international mobility. Coming from another country? We offer a relocation package.
💰 Competitive pay with ongoing reviews.
💪 Wellbeing programme: medical, dental, and accident insurance; free fruit and coffee; training in physical, mental, and financial health; and much more!
⚠️ In our selection processes you will always have phone and personal contact, in person or online, with our talent acquisition team, and we will never ask for bank transfers or card details. If you are contacted through any other process, write to our team at [email protected]
❤️ We promote equal opportunities in hiring and are committed to inclusion and diversity.
WHAT ARE YOU WAITING FOR? JOIN US
Machine Learning Engineer
13 Nov · EPAM
Madrid, ES
Machine Learning Engineer
EPAM · Madrid, ES
Python Azure Docker Cloud Computing Kubernetes DevOps Machine Learning
We are looking for a Machine Learning Engineer to join our team and drive the development of a scalable machine learning framework and tooling.
You will play a key role in enabling efficient collaboration between data scientists, data engineers and cloud architects. You'll also help build GenAI-centric tools that improve the ML lifecycle through automation, optimization and observability.
RESPONSIBILITIES
- Design, build and maintain a robust framework to support machine learning projects at scale
- Act as a technical bridge between data science, engineering and cloud infrastructure teams
- Collaborate on the development and deployment of GenAI applications and agents such as LLM pipelines and image generation models
- Deploy models using containerized and serverless infrastructure such as Docker, Kubernetes and Azure Functions
REQUIREMENTS
- Proven experience in MLOps and DevOps practices across the ML lifecycle
- Hands-on experience with cloud platforms, especially Azure: Azure ML, Functions, Storage
- Familiarity with orchestration of ML pipelines and experiments using MLOps tooling such as MLflow, Vertex AI, Azure Machine Learning, Databricks Workflows and SageMaker
- Solid understanding of model deployment using Docker, Kubernetes and serverless technologies
- Strong software engineering background: Python, CI/CD, testing frameworks
NICE TO HAVE
- Experience with GenAI technologies such as agentic workflows: LangChain, OpenAI tools, custom agents
- Working knowledge of the MCP server or similar scalable serving architectures
- Exposure to retrieval-augmented generation (RAG) or vector database integrations
- Experience working with infrastructure-as-code tools for deploying ML systems on the cloud
WE OFFER
- Private health insurance
- EPAM Employees Stock Purchase Plan
- 100% paid sick leave
- Referral Program
- Professional certification
- Language courses
BECA AHE SW Test Engineer
10 Nov · Airbus
Madrid, ES
BECA AHE SW Test Engineer
Airbus · Madrid, ES
Software Integration and Verification & Validation: the ability to verify that the system's requirements are correctly and completely implemented. The result of this activity may be required for qualification of the system in the frame of Customer acceptance or certification.
- Define interfaces between functions, or between systems, and manage their consistent implementation into the system(s)
- Test preparation, execution, and analysis for the functions of a System / Sub-system / Equipment / Component / Module.
The jobholder shall take on the following main tasks (under direct coordination with ETZWM):
- Analysis of Problem Reports (PRs) and Engineering Change Requests (ECRs)
- Performance of integration test on the STB to verify the correct implementation of PR solutions and ECRs
- Changes / Adaptations of the Software Test Descriptions and Procedures
- Verification (i.e. inspections/walkthrough) of Test Descriptions and Procedures
- Performance of Formal Qualification Tests of the STB (for either Flight Clearance or Qualification purposes), including collection and analysis of Results.
- Changes / Adaptation of the Data Models in DUET and ODIN
- Generation of Test Documentation (STD, STR)
This job requires an awareness of any potential compliance risks and a commitment to act with integrity, as the foundation for the Company's success, reputation and sustainable growth.
Company:
Airbus Helicopters España, SA
Employment Type:
Internship
Experience Level:
Student
Job Family:
Software Engineering
DevOps Engineer
10 Nov · HAYS
DevOps Engineer
HAYS · Madrid, ES
Remote · Docker DevOps Terraform
At HAYS we are working with a well-known international technology firm headquartered in Madrid that offers a full range of technology products, solutions, and services, and whose roughly 130,000 employees support clients in more than 100 countries. We are currently looking to hire a DevOps Engineer to design and provide technical support to clients on the implementation and deployment of an innovative genomics platform for genetic and genomic analysis.
What are we looking for?
- A senior networking specialist
- Proactivity and good communication skills.
- Professional working proficiency in Spanish or English.
What are the duties?
- Database configuration and connectivity.
- Service deployment.
- Solid knowledge of UNIX/Bash and systems administration.
- Kubernetes configuration (Helm, ArgoCD).
- Familiarity with K3s, Helm, ArgoCD.
- Docker / Docker Compose.
- Experience with firewalls, routing, and NAS systems.
- Database administration (DBA) knowledge.
- Knowledge of Terraform (IaC) is a plus.
What do we offer?
- Contract type: freelance / contractor agreement.
- Working model: 100% remote. Competitive salary.
We are looking for profiles like yours, passionate about technology and eager to take on a new challenge. If that sounds like you, apply to this offer so we can tell you more.