DevOps Engineer
Feb 17 · Crypto Fund Trader · Nafarroa, ES
Remote · Python Azure Jenkins Docker Cloud Computing Kubernetes Ansible AWS Bash DevOps Fintech Terraform
What we're looking for
At Crypto Fund Trader (CFT) we are looking for a DevOps Engineer with a solid foundation in automation, cloud and CI/CD, and the drive to grow within a high-impact project.
We want someone with technical judgment, a focus on reliability and efficiency, the autonomy to work independently, and the ability to propose improvements to our infrastructure and future deployments.
About Crypto Fund Trader
CFT is a prop firm specializing in cryptocurrencies. We develop our platform in-house: digital, secure and constantly evolving, designed to let traders around the world operate without friction. We are a young, flat, collaborative team focused on building a scalable product with a long-term vision.
Responsibilities
- Design, implement and maintain CI/CD pipelines.
- Manage and optimize cloud infrastructure.
- Automate operational tasks to improve the technical team's efficiency (see the sketch after this list).
- Collaborate with the whole team to ensure reliable, repeatable deployments.
- Monitor systems, detect bottlenecks and resolve incidents.
- Improve and document deployments, monitoring and incident response.
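The posting contains no code, but as a minimal sketch of the kind of operational automation it describes, here is a post-deployment smoke test that a CI/CD stage could run. The service names and URLs are hypothetical placeholders, not anything from the ad.

```python
"""Post-deployment smoke test: probe each service's health endpoint."""
import sys
import urllib.error
import urllib.request

# Hypothetical endpoints; substitute the services of your own stack.
SERVICES = {
    "api": "https://api.example.com/healthz",
    "web": "https://www.example.com/healthz",
}

def check(name: str, url: str, timeout: float = 5.0) -> bool:
    """Return True if the endpoint answers 2xx within the timeout."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            ok = 200 <= resp.status < 300
    except (urllib.error.URLError, TimeoutError):
        ok = False
    print(f"{name}: {'OK' if ok else 'FAIL'} ({url})")
    return ok

if __name__ == "__main__":
    # A non-zero exit code lets the CI/CD stage fail the deployment.
    sys.exit(0 if all(check(n, u) for n, u in SERVICES.items()) else 1)
```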
Desired requirements
- At least 2 years of experience in DevOps, SRE or similar roles.
- Experience with containers and orchestration (Docker, Kubernetes or others).
- Solid knowledge of continuous integration and delivery (CI/CD).
- Familiarity with monitoring and logging (Prometheus, Grafana or others).
- Scripting skills in Bash and Python.
- Experience with Jenkins and pipelines (Bitbucket, GitHub Actions, GitLab CI).
- Experience with IaC: Ansible or Terraform.
- Intermediate-to-advanced English (minimum B2-C1).
Nice to have
- Experience in fintech, trading, high-availability environments or critical systems.
- Experience working with cloud providers (AWS, GCP, Azure or similar).
- The technical judgment to propose improvements in architecture, security and cost.
What we offer
- Full-time contract.
- 100% remote work with flexible hours.
- A real project, funded with our own capital and growing continuously.
- A young, flat, collaborative technical team.
- Active participation in technical and product decisions.
- A culture built on trust, responsibility and constant learning.
- Competitive compensation, adjusted to your experience and contribution to the team.
Interested?
If you don't meet 100% of the requirements, that's fine. We especially value your attitude, ability to learn and willingness to contribute from day one.
API Platform Developer
Feb 17 · sg tech · Madrid, ES
API Azure Docker Kubernetes Terraform Kafka
Description
At SG Tech we drive the digital transformation of large organizations through robust, scalable, business-oriented technology solutions. We work with top-tier clients in complex, critical environments, committed to technical excellence, responsible innovation and the continuous development of talent. Our working model combines engineering rigor, a platform mindset and a collaborative, impact-oriented culture.
We are hiring an API Platform Developer with a strong orientation toward API governance and platform architecture, responsible for defining, evolving and operating cross-cutting API capabilities in hybrid and multicloud environments. You will join a highly strategic context, working with backend teams and business domains to guarantee contract consistency, security, observability and scalability at a global level.
The role has a strong platform component: governance of OpenAPI and AsyncAPI standards, management of multi-region deployments, cross-cloud event integration and definition of security policies at the edge. You will also take part in the evolution toward AI-native practices in observability, automation and platform engineering; these aspects are desirable and trainable given a solid foundation in API platforms.
We offer a technically demanding environment with real impact on core platforms, access to cutting-edge technologies and a hybrid working framework that favors both in-person collaboration and flexibility. If you are motivated by building platforms that enable digital ecosystems at scale, this is your next step.
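To make "contract governance" concrete: real platforms enforce such rules with dedicated linters (Spectral rulesets, gateway policies); the sketch below, over a toy spec of our own invention, only shows the shape of the consistency checks a governance layer applies to OpenAPI documents.

```python
"""Minimal contract-governance check over an OpenAPI 3.x document."""

# A toy spec standing in for a real, much larger OpenAPI document.
spec = {
    "openapi": "3.0.3",
    "info": {"title": "Payments API", "version": "1.0.0"},
    "paths": {
        "/payments": {
            "post": {"operationId": "createPayment",
                     "responses": {"201": {"description": "Created"}}},
            "get": {"responses": {"200": {"description": "OK"}}},  # missing operationId
        }
    },
}

HTTP_METHODS = {"get", "post", "put", "patch", "delete", "head", "options"}

def lint(doc: dict) -> list[str]:
    """Flag operations that break two simple contract rules."""
    problems = []
    for path, item in doc.get("paths", {}).items():
        for method, op in item.items():
            if method not in HTTP_METHODS:
                continue
            if "operationId" not in op:
                problems.append(f"{method.upper()} {path}: missing operationId")
            if not op.get("responses"):
                problems.append(f"{method.upper()} {path}: no responses defined")
    return problems

for issue in lint(spec):
    print("GOVERNANCE:", issue)
```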
Requirements
Must have
- More than 5 years of experience in platform or API engineering in enterprise environments.
- Solid experience in API governance and the management of global catalogs (Apigee Hybrid or equivalent solutions).
- Command of the OpenAPI 3.x and AsyncAPI standards, and of defining consistent contracts.
- Experience with hybrid and multicloud architectures (Azure and GCP).
- Advanced knowledge of messaging and event-driven design with Kafka and Schema Registry.
- Implementation of API security mechanisms: OAuth 2.1, OpenID Connect and mTLS (see the sketch after this list).
- Experience with Kubernetes (mainly AKS) and containerized deployments with Docker.
- CI/CD and GitOps automation with GitHub Actions and ArgoCD.
- Use of observability tools such as Datadog and/or Dynatrace.
- Fluent English and Spanish, with a minimum C1 level of English.
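As an illustration of the OAuth-plus-mTLS combination named above, here is a stdlib-only sketch of a client-credentials token request over a mutually authenticated TLS channel. The issuer URL, client id, secret and certificate paths are all placeholders; note too that OAuth 2.1 deployments with mTLS often bind tokens to the client certificate instead of using a shared secret, which is kept here only for brevity.

```python
"""OAuth client-credentials grant over a mutually-authenticated TLS channel."""
import json
import ssl
import urllib.parse
import urllib.request

TOKEN_URL = "https://auth.example.com/oauth2/token"  # hypothetical issuer

# mTLS: present a client certificate and verify the server's chain.
ctx = ssl.create_default_context(cafile="ca-chain.pem")
ctx.load_cert_chain(certfile="client.crt", keyfile="client.key")

body = urllib.parse.urlencode({
    "grant_type": "client_credentials",
    "client_id": "platform-gateway",   # placeholder client
    "client_secret": "REDACTED",
    "scope": "payments:read",
}).encode()

req = urllib.request.Request(TOKEN_URL, data=body, method="POST")
with urllib.request.urlopen(req, context=ctx) as resp:
    token = json.load(resp)["access_token"]
print("token acquired:", token[:12], "...")
```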
Desirable
- Experience with AI-native practices applied to platforms (AIOps, semantic observability, intelligent automation).
- Knowledge of RAG architectures, vector databases and LLM integration patterns.
- Experience with API monetization, rate limiting, API keys and metering.
- Familiarity with tools such as Terraform, Packer, Istio or Linkerd.
- Knowledge of regulatory frameworks and standards such as Open Banking, ISO 20022 or enterprise compliance.
- Experience defining LLMOps standards and prompt and model governance.
Working model:
Hybrid.
- Madrid: 2 days on site.
- Other locations: 4 days on site.
API Developer + English
Feb 17 · Grupo Digital · Málaga, ES
API Azure Cloud Computing Terraform Kafka
Description
At Grupo Digital, we are hiring for a major international bank:
API Developer
You will develop global API architecture, hybrid cloud connectivity and AI-native capabilities on a BaaS platform with international reach.
You will own the API layer: governance, automation and observability, with a focus on monetization and LLMOps.
What you'll do
- Lead global API governance and the catalog (Apigee Hybrid), applying the OpenAPI 3.1 and AsyncAPI standards.
- Manage multi-region deployments and hybrid connectivity (Azure & GCP).
- Integrate cross-cloud events (Kafka ↔ Azure Event Hubs) and define schema-registry standards.
- Implement OAuth 2.1, OIDC and mTLS at the platform edge.
- Drive AI-native observability (Datadog AI, Dynatrace Davis, MAISA AI).
- Automate CI/CD (GitHub Actions), GitOps (ArgoCD) and IaC (Terraform, Packer).
- Design RAG for semantic API discovery and LLM-based contract validation.
- Define metering, rate limiting and API monetization capabilities (see the sketch after this list).
- Collaborate with backend leads to ensure consistent contracts and formats.
- Automate incident triage and auto-remediation playbooks with AI agents.
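For the rate-limiting and metering item above, here is a pure-Python sketch of the classic token-bucket algorithm that gateways apply per API key. In practice a platform like Apigee enforces this with quota policies at the gateway rather than in application code; the class below only illustrates the mechanism.

```python
"""Token-bucket rate limiter of the kind an API gateway applies per API key."""
import time

class TokenBucket:
    def __init__(self, rate: float, capacity: int):
        self.rate = rate            # tokens replenished per second
        self.capacity = capacity    # maximum burst size
        self.tokens = float(capacity)
        self.last = time.monotonic()

    def allow(self) -> bool:
        """Consume one token if available; refill based on elapsed time."""
        now = time.monotonic()
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

# 5 requests/second with bursts of up to 10 for one hypothetical API key.
bucket = TokenBucket(rate=5, capacity=10)
allowed = sum(bucket.allow() for _ in range(20))
print(f"{allowed} of 20 burst requests admitted")  # expect ~10
```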
DevSecOps Team Lead
Feb 16 · BrainRocket · València, ES
Cloud Computing Kubernetes Ansible Microservices REST AWS DevOps Fintech Terraform Office
BrainRocket is a global company creating end-to-end tech products for clients across Fintech, iGaming, and Marketing. Young, ambitious, and unstoppable, we've already taken Cyprus, Malta, Portugal, Poland, and Serbia by storm. Our BRO team consists of 1,300 bright minds creating innovative ideas and products. We don't follow formats. We shape them. We build what works, launch it fast, and make sure it hits.
We are seeking a DevSecOps Team Lead to join our team in one of our European offices:
- Belgrade, Serbia
- Lisbon, Portugal
- Sofia City, Bulgaria
- Valencia, Spain
- Warsaw, Poland
No remote, no hybrid. Office presence is required.
Role Mission:
Lead and scale the DevSecOps function by embedding security into CI/CD pipelines, cloud platforms, and Kubernetes environments, enabling engineering teams to deliver secure, compliant, and high-velocity releases.
Key Responsibilities:
• Define the DevSecOps strategy, roadmap, and operating model across the organization.
• Build, mentor, and lead a high-performing DevSecOps team.
• Integrate security into CI/CD pipelines (SAST, DAST, SCA, IaC scanning, secrets scanning).
• Own security for Kubernetes (EKS), Istio, and Service Mesh environments.
• Implement and maintain policy-as-code using OPA and admission controllers (see the sketch after this list).
• Secure infrastructure-as-code using Terraform, Ansible, Helm, and related tooling.
• Drive cloud security across AWS and GCP environments.
• Partner with DevOps teams to provide secure platform architectures, training, and operational support.
• Implement and maintain SIEM, logging, and security monitoring (ELK, Splunk).
• Oversee secrets management, Vault, and privileged access controls.
• Lead automation of security workflows, access control, and compliance processes.
• Ensure alignment with SSDLC (OWASP SAMM v2) and security governance standards.
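To ground the policy-as-code item: OPA policies are written in Rego and evaluated by an admission webhook, so the Python below is only a stand-in showing the shape of the rule such a policy encodes, applied to an invented Pod manifest: reject any pod that runs a privileged container.

```python
"""Admission-control rule, illustrated in Python rather than Rego."""

def violations(pod: dict) -> list[str]:
    """Return denial messages for a Kubernetes Pod manifest (as a dict)."""
    msgs = []
    for c in pod.get("spec", {}).get("containers", []):
        if c.get("securityContext", {}).get("privileged"):
            msgs.append(f"container '{c['name']}' must not run privileged")
    return msgs

pod = {
    "kind": "Pod",
    "metadata": {"name": "demo"},
    "spec": {"containers": [
        {"name": "app", "image": "app:1.0"},
        {"name": "debug", "image": "busybox",
         "securityContext": {"privileged": True}},
    ]},
}

for msg in violations(pod):
    print("DENY:", msg)  # an admission controller would reject the request
```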
Requirements:
• 5+ years in DevOps, DevSecOps, or Cloud Security, with leadership or ownership of security initiatives.
• Strong expertise in CI/CD pipelines and secure software delivery.
• Deep knowledge of Kubernetes, Service Mesh (Istio), and container security.
• Hands-on experience with Terraform, Ansible, Helm, or similar tools.
• Strong understanding of cloud security (AWS and/or GCP).
• Experience implementing security scanners in pipelines (SAST, DAST, SCA, IaC).
• Knowledge of microservices architecture and distributed systems.
• Experience with SIEM platforms (ELK, Splunk) and security monitoring.
• Experience with Vault, secrets management, and privileged access control.
• Understanding of networking (TCP/IP, OSI) and secure system design.
• Experience in security risk assessment, mitigation, and automation.
• Familiarity with OWASP SAMM, SSDLC, and secure development practices.
We offer excellent benefits, including but not limited to:
• Learning and development opportunities and interesting, challenging tasks.
• Opportunity to develop language skills, with partial compensation for the cost of Spanish classes (for localisation purposes).
• Relocation package (tickets, 2 weeks of accommodation, and visa support).
• Global health insurance coverage.
• Time for proper rest, with 23 working days of annual vacation and an additional 6 paid sick days.
• Competitive remuneration with an annual review.
• Teambuilding activities.
Bold moves start here. Make yours. Apply today!
By submitting your application, you agree to our Privacy Policy.
DevOps Engineer
Feb 16 · Swiss Re · Madrid, ES
API Python Agile Azure Cloud Computing Kubernetes Bash DevOps Perl Go Terraform Office
Join a team of cybersecurity professionals and help Swiss Re fulfil its mission of making the world more resilient. As a DevOps Engineer, you'll be responsible for deploying and operating our data scanning/data discovery solution (BigID) in Kubernetes environments, creating CI/CD pipelines, and integrating data security solutions with our IT landscape. You'll work in a hybrid setup, balancing work from home and the office premises.
About the Role
As a DevOps Engineer, you'll be responsible for protecting Swiss Re's sensitive data through the development and implementation of processes, tools and strategies that prevent data leakage and misuse.
We are enhancing our capabilities in data discovery, classification and policy enforcement. These improvements enable us to identify sensitive data across the enterprise, automate protection measures and integrate insights into our security operations to better safeguard information and meet regulatory requirements.
We're looking for a skilled DevOps Engineer who will take the initiative in implementing the best solution and guide the development of these engineering services alongside a dedicated team of experts.
About the Team
The Security Team is the focal point for all security activities across Swiss Re. We are responsible for cybersecurity engineering and operations, governance, risk and compliance. We define and advance the company's security strategy.
As part of the Security Team, the Continuous Security Assurance (CSA) Engineering team owns and develops applications and tools for vulnerability management, penetration testing, and Red Teaming.
We are looking for an expert engineer who'll help us integrate vulnerability sensors, process vulnerability data and improve our security operations through automation.
In your role, you will...
- Deploy, operate and optimise data scanning/data discovery solutions (BigID) in Kubernetes environments
- Design and build CI/CD pipelines for security solutions
- Develop and maintain API automations to streamline security processes (see the sketch after this list)
- Integrate data security solutions with the broader IT landscape
- Improve metrics and monitoring to ensure the reliability of our security infrastructure
- Utilise existing documentation, source code and logs to understand complex interactions between systems
- Provide security guidance on new products and technologies
- Communicate and collaborate effectively with stakeholders
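As a sketch of the API-automation work described above: pull findings from a scanner's REST API and roll them up so a pipeline can gate on the result. The endpoint, token and response shape are hypothetical; BigID's actual API differs, so treat this purely as the pattern.

```python
"""Pull data-discovery findings from a scanner's REST API and summarize them."""
import json
import urllib.request
from collections import Counter

API = "https://scanner.internal.example.com/api/v1/findings"  # placeholder
TOKEN = "REDACTED"

req = urllib.request.Request(API, headers={"Authorization": f"Bearer {TOKEN}"})
with urllib.request.urlopen(req, timeout=10) as resp:
    findings = json.load(resp)  # assume a JSON list of finding objects

# Roll findings up by severity so a CI/CD stage can gate on the result.
by_severity = Counter(f.get("severity", "unknown") for f in findings)
print(dict(by_severity))
if by_severity.get("critical", 0) > 0:
    raise SystemExit("critical findings present - failing the pipeline")
```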
About You
You're a passionate security professional who has worked with CI/CD deployment practices and Kubernetes environments. You thrive in collaborative environments and can translate complex technical concepts into practical security solutions. Your technical expertise is complemented by good communication skills and a drive to continuously improve the security infrastructure and application landscape.
We are looking for candidates who meet these requirements:
- Bachelor's degree in Computer Science, Software Engineering or equivalent
- 3+ years of relevant work experience
- Expertise with several of the following areas:
- Kubernetes environments
- Cloud deployment with infrastructure-as-code (Azure preferred)
- CI/CD pipeline design and implementation
- Significant knowledge of major cybersecurity concepts, technologies and standard methods, with a willingness to dive into new areas
- Knowledge of a major public cloud ecosystem (Microsoft Azure preferred)
- Knowledge of microarchitecture design in Azure and other cloud providers, and of Azure security tooling
- Familiarity with the implications of security standards in regulated environments
- Experience in automation, coding and/or scripting, using one or more of the following languages: Bash, Golang, Python, Perl, Terraform or similar
- Can-do attitude with a proactive approach toward challenges, producing tangible results
- Excellent communication skills - fluency in English, both spoken and written
Additional nice-to-haves:
- API development and automation experience
- Network security, application security and identity management
- Knowledge of data security and data discovery solutions (BigID or similar)
- Experience with agile development and DevOps
- Experience building integrations to existing systems
For Spain the base salary range for this position is between EUR 42,000 and EUR 70,000 (for a full-time role). The specific salary offered considers:
- the requirements, scope, complexity and responsibilities of the role,
- the applicant´s own profile including education/qualifications, expertise, specialisation, skills and experience.
If you do not meet all the requirements, or significantly exceed them, the offered salary may fall below or above the advertised range.
In addition to your base salary, you may be eligible for additional rewards and benefits including an attractive performance-based bonus.
We provide feedback to all candidates; if you have not heard from us, please check your spam folder.
Senior Analytics Engineer
Feb 15 · Lighthouse · Madrid, ES
Python TSQL Cloud Computing SaaS AWS Excel Terraform Tableau
At Lighthouse, we’re on a mission to disrupt commercial strategy for the hospitality industry. Our innovative commercial platform takes the complexity out of data, empowering businesses with actionable insights, advanced pricing tools, and cutting-edge business intelligence to unlock their full revenue potential.
Backed by $370 million in series C funding and driven by an unwavering passion for growth, we’ve welcomed five companies into our journey and have surpassed $100 million in ARR in 2024. Our 850+ teammates span 35 countries and represent 34 nationalities.
At Lighthouse, we’re more than just a workplace – we’re a community. Collaborative, fun, and deeply committed, we work hard together to revolutionize the hospitality sector. Are you ready to join us and shine brighter in the industry’s most exciting rocket-ship? 🚀
What You Will Do
As a Senior Analytics Engineer, you'll leverage all the data sets available within Lighthouse to build products, services, insights and data stories for our Enterprise customer segment. You'll research how we can cater to customer needs, and sometimes accept that the research didn't produce the outcome we hoped for. The role encompasses a broad range of use cases and stakeholders that can be served by the same type of data, exposed and analysed in different ways.
Where you will have impact
- Deliver impactful research and data stories for our enterprise customers, shaping their commercial strategies.
- Own and drive the development of our data footprint within the Enterprise space, collaborating with the product manager to define strategy.
- Become an expert on Lighthouse's data assets, creatively leveraging them to serve clients like global hotel chains and OTAs.
- Coach and mentor junior members of the analytics team, both within and outside the Enterprise vertical, fostering growth.
- Collaborate closely with business stakeholders and your product manager to understand their needs and translate them into data-driven solutions.
- Communicate complex data concepts and solutions clearly to both technical and non-technical audiences.
- You will be at the forefront of our AI evolution, helping to embed intelligence into our platform. You’ll not only build AI features for our customers but also champion an AI-first development culture within the engineering team.
- Design and execute Proof of Concepts and experiments to validate new ideas and data products.
Lighthouse is not only a data-driven company; we are a data company. Data is the heart of all our products: it enables hotels to make the right decisions and fuels our analytical solutions. Being a growth company lets us regularly attract new and interesting datasets, which can unlock new product directions. Today we process billions of data points and more than 100 TB of data daily, covering hotels' pricing information, search data, hotel bookings, and more, all using modern technologies.
The data solutions team is part of our Enterprise vertical within engineering. It's a domain and focus area we established a year and a half ago; it entails:
- Teams originally from different companies and acquisitions, brought together and integrated into two product areas: Data Solutions and Distribution.
- A focus on leveraging the data we already have in new ways, using the vastness of data points Lighthouse can offer to support our Enterprise customers as well as possible.
- 'A few' customers served by a single product roadmap. We build and we iterate.
What's in it for you?
- Flexible time off: Autonomy to manage your work-life balance.
- Alan Flex benefits: 160€/month for food or nursery.
- Flexible retribution: Optional benefits through tax-free payroll deductions for food, transportation and/or nursery.
- Wellbeing support: Subsidized ClassPass subscription.
- Comprehensive health insurance: 100% Alan coverage for you, your spouse, and dependents.
- Impactful work: Shape products relied on by 85,000+ users worldwide.
- Referral bonuses: Earn rewards for bringing in new talent.
- Multiple years of experience in a data analyst, analytics engineer, or data science role, preferably in a SaaS or enterprise software environment.
- Solid relational modeling skills using SQL and programming experience, preferably in Python.
- Hands-on experience with data transformation tools such as dbt.
- Proven ability to create compelling data visualizations and dashboards with tools like Looker, Tableau, or Looker Studio.
- Experience working with major cloud platforms, such as GCP or AWS.
- A talent for crafting compelling data stories and clearly communicating their business impact to diverse stakeholders.
- A keen interest in and knowledge of the latest developments in AI, particularly conversational AI and LLMs.
- Excellent communication skills in both written and spoken English.
- Experience solving complex problems using large, real-world datasets.
The stack: SQL (Google's BigQuery), Python, GCP, Looker, Looker Studio / Tableau (whichever makes more sense for the task), Terraform, occasionally Airflow, Excel, and Google Slides (only if necessary).
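Since the stack centres on BigQuery driven from Python, here is a minimal sketch of that combination. It assumes the google-cloud-bigquery package and GCP credentials in the environment; the dataset and table names are invented placeholders.

```python
"""Run an aggregation in BigQuery and fetch the result as rows."""
from google.cloud import bigquery

client = bigquery.Client()  # picks up project/credentials from the environment

QUERY = """
    SELECT hotel_id, AVG(price) AS avg_price
    FROM `analytics.enterprise_rates`      -- placeholder table
    WHERE stay_date >= DATE_SUB(CURRENT_DATE(), INTERVAL 30 DAY)
    GROUP BY hotel_id
    ORDER BY avg_price DESC
    LIMIT 10
"""

for row in client.query(QUERY).result():
    print(row.hotel_id, round(row.avg_price, 2))
```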
Thank you for considering a career with Lighthouse. We are committed to fostering a diverse and inclusive workplace that values equal opportunity for all. We welcome candidates from all backgrounds, regardless of age, gender, race, religion, sexual orientation, and disability. We actively encourage applications from individuals with disabilities and are dedicated to providing reasonable accommodations throughout the recruitment process and during employment to ensure all qualified candidates can participate fully. Our commitment to equality is not just a policy; it's part of our culture.
If you share our passion for innovation and teamwork, we invite you to join us in shaping the future of the hospitality industry. At Lighthouse, our guiding light is to be an equal opportunity employer, and we encourage individuals from all walks of life to apply. Not ticking every box? No problem! We value diverse backgrounds and unique skill sets. If your experience looks a little different from what we've described, but you're passionate about what we do and are a quick learner, we'd love to hear from you.
We value the unique perspective and talents that you bring, and we're excited to see how your light can shine within our team. We can't wait to meet you and explore how we can grow and succeed together, illuminating the path towards a brighter future for the industry.
Data Engineer
Feb 15 · Lighthouse · Barcelona, ES
Python Cloud Computing Kubernetes Terraform Kafka Machine Learning
At Lighthouse, we’re on a mission to disrupt commercial strategy for the hospitality industry. Our innovative commercial platform takes the complexity out of data, empowering businesses with actionable insights, advanced pricing tools, and cutting-edge business intelligence to unlock their full revenue potential.
Backed by $370 million in series C funding and driven by an unwavering passion for growth, we’ve welcomed five companies into our journey and have surpassed $100 million in ARR in 2024. Our 850+ teammates span 35 countries and represent 34 nationalities.
At Lighthouse, we’re more than just a workplace – we’re a community. Collaborative, fun, and deeply committed, we work hard together to revolutionize the hospitality sector. Are you ready to join us and shine brighter in the industry’s most exciting rocket-ship? 🚀
What You Will Do
As a Data Engineer in our new Data Products team, you will play a key role in shaping the quality and business value of our core data assets. You will be hands-on in designing, building, and maintaining the data pipelines that serve teams across Lighthouse. You will act as a bridge between our data and the business, collaborating with stakeholders and ensuring our data effectively enables its consumers.
Where you will have impact
- Become the expert for key data products, understanding the full data lifecycle, quality, and business applications.
- Design, implement, and maintain the streaming and batch data pipelines that power our products and internal analytics.
- Collaborate directly with data consumers to understand their needs, gather requirements, and deliver data solutions.
- Deliver improvements in data quality, latency, and reliability.
- Show a product engineering mindset, focusing on delivering value and solving business problems through data.
- You will be at the forefront of our AI evolution, helping to embed intelligence into our platform. You’ll not only build AI features for our customers but also champion an AI-first development culture within the engineering team.
- Mentor other engineers, sharing your expertise and contributing to their growth.
The Data Products Team is the definitive source of truth for Lighthouse's data, sitting at the foundational layer of our entire data ecosystem.
Its core mission is to model and deliver high-quality, foundational data products that are essential ingredients for all downstream product features, machine learning models, and data science initiatives across the company:
- Data Modeling & Ownership: Defining and optimizing core data entities for product and analytical use.
- Pipeline Engineering: Building robust ETL/ELT pipelines to transform raw integrated data into trusted domains.
- Data Quality: Establishing standards and monitoring the health of all foundational data assets (see the sketch below).
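For the data-quality item: tools like Soda or dbt tests express such checks declaratively, so the pure-Python sketch below, over invented toy records, only shows the checks themselves — a null-rate threshold and a freshness threshold of the kind a pipeline runs per batch.

```python
"""Lightweight data-quality assertions of the kind a pipeline runs per batch."""
from datetime import date, timedelta

rows = [  # stand-in for a freshly loaded batch
    {"hotel_id": 1, "price": 120.0, "loaded_on": date.today()},
    {"hotel_id": 2, "price": None, "loaded_on": date.today()},
    {"hotel_id": 3, "price": 95.5, "loaded_on": date.today() - timedelta(days=3)},
]

def check_batch(batch: list[dict], max_null_rate: float = 0.1,
                max_staleness_days: int = 1) -> list[str]:
    """Return human-readable failures for null-rate and freshness rules."""
    failures = []
    null_rate = sum(r["price"] is None for r in batch) / len(batch)
    if null_rate > max_null_rate:
        failures.append(f"null rate {null_rate:.0%} exceeds {max_null_rate:.0%}")
    stale = [r for r in batch
             if (date.today() - r["loaded_on"]).days > max_staleness_days]
    if stale:
        failures.append(f"{len(stale)} rows older than {max_staleness_days} day(s)")
    return failures

for f in check_batch(rows):
    print("DQ FAIL:", f)  # a real pipeline would alert or halt downstream loads
```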
What's in it for you?
- Flexible time off: Autonomy to manage your work-life balance.
- Alan Flex benefits: 160€/month for food or nursery.
- Flexible retribution: Optional benefits through tax-free payroll deductions for food, transportation and/or nursery.
- Wellbeing support: Subsidized ClassPass subscription.
- Comprehensive health insurance: 100% Alan coverage for you, your spouse, and dependents.
- Impactful work: Shape products relied on by 85,000+ users worldwide.
- Referral bonuses: Earn rewards for bringing in new talent.
- Experience in a data engineering role, with a proven track record of building scalable data pipelines.
- A product engineering mindset, with a focus on understanding business context and stakeholder needs.
- Professional proficiency in Python for data processing and pipeline development.
- Strong knowledge of cloud database solutions such as BigQuery, Snowflake, or Databricks.
- You are a forward-thinking builder who views AI as a core component of modern architecture. You have a proven interest (or experience) in working with LLMs, agentic workflows, or AI-assisted coding tools to ship higher-quality code, faster.
- Excellent communication and stakeholder management skills.
- Experience with microservice architectures and data streaming systems like Kafka or Google Cloud Pub/Sub.
- Familiarity with data governance or data quality tools such as Atlan or Soda.
- Experience mentoring other engineers.
The stack, mostly but not limited to: GCP, Python, BigQuery, Kubernetes, Airflow, dbt, Terraform, Atlan (data governance tool), Soda.
Thank you for considering a career with Lighthouse. We are committed to fostering a diverse and inclusive workplace that values equal opportunity for all. We welcome candidates from all backgrounds, regardless of age, gender, race, religion, sexual orientation, and disability. We actively encourage applications from individuals with disabilities and are dedicated to providing reasonable accommodations throughout the recruitment process and during employment to ensure all qualified candidates can participate fully. Our commitment to equality is not just a policy; it's part of our culture.
If you share our passion for innovation and teamwork, we invite you to join us in shaping the future of the hospitality industry. At Lighthouse, our guiding light is to be an equal opportunity employer, and we encourage individuals from all walks of life to apply. Not ticking every box? No problem! We value diverse backgrounds and unique skill sets. If your experience looks a little different from what we've described, but you're passionate about what we do and are a quick learner, we'd love to hear from you.
We value the unique perspective and talents that you bring, and we're excited to see how your light can shine within our team. We can't wait to meet you and explore how we can grow and succeed together, illuminating the path towards a brighter future for the industry.
DevOps
Aubay · Barcelona, ES
Remote · Jenkins Ansible DevOps Terraform
Duties
Performance monitoring and analysis, deployment automation, infrastructure management, CI/CD process improvement, and support for technical teams.
Requirements
Experience with Nexthink and/or Dynatrace; knowledge of DevOps, Terraform, Ansible and Jenkins. Experience in automation and infrastructure as code is a plus.
Hybrid working model in Barcelona (2 days on site, 3 days remote).
*A 33% disability certificate will be viewed favourably.
What we offer
At AUBAY we are hiring a DevOps engineer for Barcelona.
We offer the chance to join a company in continuous growth, taking part in innovative projects that will let you round out your training and develop your skills. We value commitment and dedication to the work you do.
Aubay is a multinational digital services company (DSC) founded in 1998 and currently growing strongly. We operate in high-value-added markets, both in France and elsewhere in Europe. Aubay currently employs 5,000 people.
From consultancy to technology projects of every kind, we support the transformation and modernization of information systems across all sectors, including industry, R&D, telecommunications and infrastructure, and above all the major banks and insurance companies, which account for more than 80% of our French revenue and 65% of our European revenue.
Join us; we're waiting for you!
#LI-AL1
Senior DevSecOps Engineer
Feb 13 · Talan · Madrid, ES
Python Agile Scrum Jenkins Docker Cloud Computing Ansible Oracle Groovy OpenShift AWS Bash QA Terraform Big Data Salesforce Office
Company Description
Talan - Positive Innovation
Talan is an international consulting group specializing in innovation and business transformation through technology. With over 7,200 consultants in 21 countries and a turnover of €850M, we are committed to delivering impactful, future-ready solutions.
Talan at a Glance
Headquartered in Paris and operating globally, Talan combines technology, innovation, and empowerment to deliver measurable results for our clients. Over the past 22 years, we've built a strong presence in the IT and consulting landscape, and we're on track to reach €1 billion in revenue this year.
Our Core Areas of Expertise
- Data & Technologies: We design and implement large-scale, end-to-end architecture and data solutions, including data integration, data science, visualization, Big Data, AI, and Generative AI.
- Cloud & Application Services: We integrate leading platforms such as SAP, Salesforce, Oracle, Microsoft, AWS, and IBM Maximo, helping clients transition to the cloud and improve operational efficiency.
- Management & Innovation Consulting: We lead business and digital transformation initiatives through project and change management best practices (PM, PMO, Agile, Scrum, Product Ownership), and support domains such as Supply Chain, Cybersecurity, and ESG/Low-Carbon strategies.
We work with major global clients across diverse sectors, including Transport & Logistics, Financial Services, Energy & Utilities, Retail, and Media & Telecommunications.
Job Description
The position is remote, but candidates must be based in Málaga or Madrid.
Project, Role and Task Descriptions:
• Design, implement, and maintain secure CI/CD pipelines for application build, test, and deployment.
• Integrate security scanning, compliance checks, and vulnerability management into development and deployment workflows.
• Automate infrastructure provisioning, configuration, and application deployment using modern DevSecOps tools.
• Collaborate with development, QA, security, and operations teams to ensure security is embedded throughout the SDLC.
• Support and enhance containerization, orchestration, and cloud environments with a strong focus on security best practices.
Qualifications
• CI/CD, Version Control & Security Integration: Experience building enterprise-grade CI/CD pipelines. GitHub (branching, PR workflows, GitHub Actions), GitHub Actions (secure workflows, secrets management, runner configuration), Jenkins (scripted/declarative pipelines, shared libraries), SonarQube (code quality, SAST), Fortify (static code analysis, security scanning). Experience setting up artifact repositories (Nexus, JFrog, ECR).
• Configuration Management & Automation: Ansible (roles, playbooks, secure inventory handling). Puppet (manifests, modules, environment management). Strong understanding of Infrastructure as Code (IaC) concepts and tooling (Terraform or CloudFormation).
• Scripting & Development: Bash, Python, Groovy (both for Jenkins and development). Ability to write automation scripts.
• Cloud: EC2, S3, IAM (roles, policies, least privilege), VPC networking basics, AWS CloudWatch, SSM, ECS/EKS (see the sketch after this list).
• Nice to have: Docker, OpenShift, Helm.
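As a small example of the AWS security automation this role calls for, the sketch below audits S3 buckets for a missing or incomplete public-access block. It assumes boto3 and credentials with permission to list buckets and read their public-access configuration.

```python
"""Audit S3 buckets for a missing or incomplete public-access block."""
import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")

for bucket in s3.list_buckets()["Buckets"]:
    name = bucket["Name"]
    try:
        cfg = s3.get_public_access_block(Bucket=name)["PublicAccessBlockConfiguration"]
        if not all(cfg.values()):  # any of the four flags left off
            print(f"WARN {name}: public access block incomplete: {cfg}")
    except ClientError as err:
        if err.response["Error"]["Code"] == "NoSuchPublicAccessBlockConfiguration":
            print(f"FAIL {name}: no public access block configured")
        else:
            raise
```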
Additional Information
What do we offer you?
- Support with work permits.
- Permanent, full-time contract.
- Smart Office Pack so that you can work comfortably from home.
- Training and career development.
- Benefits and perks such as private medical insurance, life insurance, language lessons, etc.
- The chance to be part of a multicultural team and work on international projects.
If you are passionate about data, development & tech, we want to meet you!