DevOps Engineer
Swiss Re · Madrid, ES · 16 Feb
Skills: API, Python, Agile, Azure, Cloud Computing, Kubernetes, Bash, DevOps, Perl, Go, Terraform, Office
Join a team of cybersecurity professionals and help Swiss Re fulfil its mission of making the world more resilient. As a DevOps Engineer, you'll be responsible for deploying and operating our data scanning/data discovery solution (BigID) in Kubernetes environments, creating CI/CD pipelines, and integrating data security solutions with our IT landscape. You'll work in a hybrid setup, balancing work from home with time at the office.
About the Role
As a DevOps Engineer, you'll be responsible for protecting Swiss Re's sensitive data through the development and implementation of processes, tools and strategies that prevent data leakage and misuse.
We are enhancing our capabilities in data discovery, classification and policy enforcement. These improvements enable us to identify sensitive data across the enterprise, automate protection measures and integrate insights into our security operations to better safeguard information and meet regulatory requirements.
We're looking for a skilled DevOps Engineer who will take the initiative in implementing the best solution and guide the development of these engineering services alongside a dedicated team of experts.
About the Team
The Security Team is the focal point for all security activities across Swiss Re. We are responsible for cybersecurity engineering and operations, governance, risk and compliance. We define and advance the company's security strategy.
As a part of the Security Team, the Continuous Security Assurance (CSA) Engineering team owns and develops applications and tools for vulnerability management, penetration testing, and Red Teaming.
We are looking for an expert engineer who'll help us integrate vulnerability sensors, process vulnerability data and improve our security operations through automation.
In your role, you will...
- Deploy, operate and optimise data scanning/data discovery solutions (BigID) in Kubernetes environments
- Design and build CI/CD pipelines for security solutions
- Develop and maintain API automations to streamline security processes
- Integrate data security solutions with the broader IT landscape
- Improve metrics and monitoring to ensure the reliability of our security infrastructure
- Utilise existing documentation, source code and logs to understand complex interactions between systems
- Provide security guidance on new products and technologies
- Communicate and collaborate effectively with stakeholders
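To give a flavour of the "API automations" the role mentions, here is a minimal, hypothetical sketch: page through a scanner's findings endpoint and collect the critical items. The `fetch_page` function and the response shape are invented for illustration; a real integration (e.g. with BigID's REST API) would use an HTTP client and the vendor's documented schema.

```python
# Hypothetical API-automation sketch: walk a paginated findings endpoint
# and keep only critical findings. The endpoint shape is invented.

def fetch_page(page: int) -> dict:
    """Stub for GET /api/findings?page=N. Replace with a real HTTP call."""
    data = {
        1: {"items": [{"id": "f-1", "severity": "critical"},
                      {"id": "f-2", "severity": "low"}], "next": 2},
        2: {"items": [{"id": "f-3", "severity": "critical"}], "next": None},
    }
    return data[page]

def critical_findings() -> list[str]:
    """Follow the 'next' pointer across all pages, collecting critical IDs."""
    ids, page = [], 1
    while page is not None:
        body = fetch_page(page)
        ids += [f["id"] for f in body["items"] if f["severity"] == "critical"]
        page = body["next"]
    return ids

print(critical_findings())  # ['f-1', 'f-3']
```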
About You
You're a passionate security professional who has worked with CI/CD deployment practices and Kubernetes environments. You thrive in collaborative environments and can translate complex technical concepts into practical security solutions. Your technical expertise is complemented by strong communication skills and a drive to continuously improve the security infrastructure and application landscape.
We are looking for candidates who meet these requirements:
- Bachelor's degree in Computer Science, Software Engineering or equivalent
- 3+ years of relevant work experience
- Expertise with several of the following areas:
- Kubernetes environments
- Cloud deployment with infrastructure-as-code (Azure preferred)
- CI/CD pipeline design and implementation
- Significant knowledge of major cybersecurity concepts, technologies and standard methods, with a willingness to dive into new areas
- Knowledge of a major public cloud ecosystem (Microsoft Azure preferred)
- Knowledge on microarchitecture design in Azure and other cloud providers, and Azure security tooling
- Familiarity with the implications of security standards in regulated environments
- Experience in automation, coding and/or scripting, using one or more of the following languages: Bash, Golang, Python, Perl, Terraform or similar
- Can-do attitude with a proactive approach toward challenges, producing tangible results
- Excellent communication skills - fluency in English, both spoken and written
Additional nice-to-haves:
- API development and automation experience
- Network security, application security and identity management
- Knowledge of data security and data discovery solutions (BigID or similar)
- Experience with agile development and DevOps
- Experience building integrations to existing systems
For Spain the base salary range for this position is between EUR 42,000 and EUR 70,000 (for a full-time role). The specific salary offered considers:
- the requirements, scope, complexity and responsibilities of the role,
- the applicant's own profile, including education/qualifications, expertise, specialisation, skills and experience.
If you do not meet all the requirements, or you significantly exceed them, the offered salary may be below or above the advertised range.
In addition to your base salary, you may be eligible for additional rewards and benefits, including an attractive performance-based bonus.
We provide feedback to all candidates; if you have not heard from us, please check your spam folder.
DevSecOps Team Lead
BrainRocket · València, ES · 16 Feb
Skills: Cloud Computing, Kubernetes, Ansible, Microservices, REST, AWS, DevOps, Fintech, Terraform, Office
BrainRocket is a global company creating end-to-end tech products for clients across Fintech, iGaming, and Marketing. Young, ambitious, and unstoppable, we've already taken Cyprus, Malta, Portugal, Poland, and Serbia by storm. Our BRO team consists of 1,300 bright minds creating innovative ideas and products. We don't follow formats. We shape them. We build what works, launch it fast, and make sure it hits.
We are seeking a DevSecOps Team Lead to join our team in one of our European offices:
- Belgrade, Serbia
- Lisbon, Portugal
- Sofia City, Bulgaria
- Valencia, Spain
- Warsaw, Poland
No remote, no hybrid. Office presence is required.
Role Mission:
Lead and scale the DevSecOps function by embedding security into CI/CD pipelines, cloud platforms, and Kubernetes environments - enabling engineering teams to deliver secure, compliant, and high-velocity releases.
Key Responsibilities:
• Define the DevSecOps strategy, roadmap, and operating model across the organization.
• Build, mentor, and lead a high-performing DevSecOps team.
• Integrate security into CI/CD pipelines (SAST, DAST, SCA, IaC scanning, secrets scanning).
• Own security for Kubernetes (EKS), Istio, and Service Mesh environments.
• Implement and maintain policy-as-code using OPA and admission controllers.
• Secure infrastructure-as-code using Terraform, Ansible, Helm, and related tooling.
• Drive cloud security across AWS and GCP environments.
• Partner with DevOps teams to provide secure platform architectures, training, and operational support.
• Implement and maintain SIEM, logging, and security monitoring (ELK, Splunk).
• Oversee secrets management, Vault, and privileged access controls.
• Lead automation of security workflows, access control, and compliance processes.
• Ensure alignment with SSDLC (OWASP SAMM v2) and security governance standards.
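As a rough illustration of the "secrets scanning" step in the CI/CD pipeline responsibilities above, here is a toy scanner that flags lines resembling hardcoded credentials. Real pipelines use dedicated tools (gitleaks, TruffleHog, or the scanners the ad names); the two patterns below are deliberately simplified stand-ins.

```python
import re

# Toy secrets scanner for a CI gate: report line numbers that look like
# hardcoded credentials. Patterns are simplified examples, not exhaustive.

SECRET_PATTERNS = [
    re.compile(r"AKIA[0-9A-Z]{16}"),                          # AWS access key ID shape
    re.compile(r"(?i)(api[_-]?key|secret)\s*=\s*['\"][^'\"]{8,}"),  # key = 'long literal'
]

def scan(text: str) -> list[int]:
    """Return 1-based line numbers that match any secret pattern."""
    hits = []
    for lineno, line in enumerate(text.splitlines(), start=1):
        if any(p.search(line) for p in SECRET_PATTERNS):
            hits.append(lineno)
    return hits

sample = "region = eu-west-1\napi_key = 'sk_live_abcdef123456'\n"
print(scan(sample))  # [2]
```

In a pipeline, a non-empty result would fail the build before the commit reaches a shared branch.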
Requirements:
• 5+ years in DevOps, DevSecOps, or Cloud Security, with leadership or ownership of security initiatives.
• Strong expertise in CI/CD pipelines and secure software delivery.
• Deep knowledge of Kubernetes, Service Mesh (Istio), and container security.
• Hands-on experience with Terraform, Ansible, Helm, or similar tools.
• Strong understanding of cloud security (AWS and/or GCP).
• Experience implementing security scanners in pipelines (SAST, DAST, SCA, IaC).
• Knowledge of microservices architecture and distributed systems.
• Experience with SIEM platforms (ELK, Splunk) and security monitoring.
• Experience with Vault, secrets management, and privileged access control.
• Understanding of networking (TCP/IP, OSI) and secure system design.
• Experience in security risk assessment, mitigation, and automation.
• Familiarity with OWASP SAMM, SSDLC, and secure development practices.
We offer excellent benefits, including but not limited to:
Learning and development opportunities and interesting, challenging tasks.
Opportunity to develop language skills, with partial compensation for the cost of Spanish classes (for localisation purposes).
Relocation package (tickets/2 weeks accommodation, and visa support).
Global coverage health insurance.
Time for proper rest, with 23 working days of annual vacation and an additional 6 paid sick days.
Competitive remuneration level with annual review.
Teambuilding activities.
Bold moves start here. Make yours. Apply today!
By submitting your application, you agree to our Privacy Policy.
Senior Analytics Engineer
Lighthouse · Madrid, ES · 15 Feb
Skills: Python, TSQL, Cloud Computing, SaaS, AWS, Excel, Terraform, Tableau
At Lighthouse, we’re on a mission to disrupt commercial strategy for the hospitality industry. Our innovative commercial platform takes the complexity out of data, empowering businesses with actionable insights, advanced pricing tools, and cutting-edge business intelligence to unlock their full revenue potential.
Backed by $370 million in series C funding and driven by an unwavering passion for growth, we’ve welcomed five companies into our journey and have surpassed $100 million in ARR in 2024. Our 850+ teammates span 35 countries and represent 34 nationalities.
At Lighthouse, we’re more than just a workplace – we’re a community. Collaborative, fun, and deeply committed, we work hard together to revolutionize the hospitality sector. Are you ready to join us and shine brighter in the industry’s most exciting rocket-ship? 🚀
What You Will Do
As a Senior Analytics Engineer, you'll leverage all the data sets available within Lighthouse to build products, services, insights and data stories for our Enterprise customer segment. You'll research how we can cater to customer needs, and sometimes accept that the research didn't have the outcome you hoped for. The role encompasses a broad range of use cases and stakeholders that can be served by the same type of data, exposed and analysed in different ways.
Where you will have impact
- Deliver impactful research and data stories for our enterprise customers, shaping their commercial strategies.
- Own and drive the development of our data footprint within the Enterprise space, collaborating with the product manager to define strategy.
- Become an expert on Lighthouse's data assets, creatively leveraging them to serve clients like global hotel chains and OTAs.
- Coach and mentor junior members of the analytics team, both within and outside the Enterprise vertical, fostering growth.
- Collaborate closely with business stakeholders and your product manager to understand their needs and translate them into data-driven solutions.
- Communicate complex data concepts and solutions clearly to both technical and non-technical audiences.
- You will be at the forefront of our AI evolution, helping to embed intelligence into our platform. You’ll not only build AI features for our customers but also champion an AI-first development culture within the engineering team.
- Design and execute Proof of Concepts and experiments to validate new ideas and data products.
Lighthouse is not only a data-driven company; we are a data company. The heart of all our products is data. It enables hotels to make the right decisions and fuels our analytical solutions. Being a growth company enables us to regularly attract new and interesting datasets, which can unlock new product directions. Today we process billions of data points and more than 100 TB of data daily, covering hotels' pricing information, search data, hotel bookings and more, all using modern technologies.
The data solutions team is part of our Enterprise vertical within engineering. It's a domain and focus area we established a year and a half ago. It entails:
- Teams originally from different companies and acquisitions being brought together and integrated into two product areas: Data Solutions and Distribution.
- A focus on the data we already have, leveraging it in different ways and using the vast number of data points Lighthouse can offer to support our Enterprise customers in the best way possible.
- 'A few' customers being served by a product roadmap. We build and we iterate.
What's in it for you?
- Flexible time off: Autonomy to manage your work-life balance.
- Alan Flex benefits: 160€/month for food or nursery.
- Flexible retribution: Optional benefits through tax-free payroll deductions for food, transportation and/or nursery.
- Wellbeing support: Subsidized ClassPass subscription.
- Comprehensive health insurance: 100% Alan coverage for you, your spouse, and dependents.
- Impactful work: Shape products relied on by 85,000+ users worldwide.
- Referral bonuses: Earn rewards for bringing in new talent.
- Multiple years of experience in a data analyst, analytics engineer, or data science role, preferably in a SaaS or enterprise software environment.
- Solid relational modeling skills using SQL and programming experience, preferably in Python.
- Hands-on experience with data transformation tools such as dbt.
- Proven ability to create compelling data visualizations and dashboards with tools like Looker, Tableau, or Looker Studio.
- Experience working with major cloud platforms, such as GCP or AWS.
- A talent for crafting compelling data stories and clearly communicating their business impact to diverse stakeholders.
- A keen interest in and knowledge of the latest developments in AI, particularly conversational AI and LLMs.
- Excellent communication skills in both written and spoken English.
- Experience solving complex problems using large, real-world datasets.
Tech stack: SQL (Google BigQuery), Python, GCP, Looker, Looker Studio / Tableau (whichever makes more sense for the task), Terraform, and occasionally Airflow, Excel and Google Slides (only if necessary).
Thank you for considering a career with Lighthouse. We are committed to fostering a diverse and inclusive workplace that values equal opportunity for all. We welcome candidates from all backgrounds, regardless of age, gender, race, religion, sexual orientation, and disability. We actively encourage applications from individuals with disabilities and are dedicated to providing reasonable accommodations throughout the recruitment process and during employment to ensure all qualified candidates can participate fully. Our commitment to equality is not just a policy; it's part of our culture.
If you share our passion for innovation and teamwork, we invite you to join us in shaping the future of the hospitality industry. At Lighthouse, our guiding light is to be an equal opportunity employer, and we encourage individuals from all walks of life to apply. Not ticking every box? No problem! We value diverse backgrounds and unique skill sets. If your experience looks a little different from what we've described, but you're passionate about what we do and are a quick learner, we'd love to hear from you.
We value the unique perspective and talents that you bring, and we're excited to see how your light can shine within our team. We can't wait to meet you and explore how we can grow and succeed together, illuminating the path towards a brighter future for the industry.
Data Engineer
Lighthouse · Barcelona, ES · 15 Feb
Skills: Python, Cloud Computing, Kubernetes, Terraform, Kafka, Machine Learning
At Lighthouse, we’re on a mission to disrupt commercial strategy for the hospitality industry. Our innovative commercial platform takes the complexity out of data, empowering businesses with actionable insights, advanced pricing tools, and cutting-edge business intelligence to unlock their full revenue potential.
Backed by $370 million in series C funding and driven by an unwavering passion for growth, we’ve welcomed five companies into our journey and have surpassed $100 million in ARR in 2024. Our 850+ teammates span 35 countries and represent 34 nationalities.
At Lighthouse, we’re more than just a workplace – we’re a community. Collaborative, fun, and deeply committed, we work hard together to revolutionize the hospitality sector. Are you ready to join us and shine brighter in the industry’s most exciting rocket-ship? 🚀
What You Will Do
As a Data Engineer in our new Data Products team, you will play a key role in shaping the quality and business value of our core data assets. You will be hands-on in designing, building, and maintaining the data pipelines that serve teams across Lighthouse. You will act as a bridge between our data and the business, collaborating with stakeholders and ensuring our data effectively enables its consumers.
Where you will have impact
- Become the expert for key data products, understanding the full data lifecycle, quality, and business applications.
- Design, implement, and maintain the streaming and batch data pipelines that power our products and internal analytics.
- Collaborate directly with data consumers to understand their needs, gather requirements, and deliver data solutions.
- Deliver improvements in data quality, latency, and reliability.
- Show a product engineering mindset, focusing on delivering value and solving business problems through data.
- You will be at the forefront of our AI evolution, helping to embed intelligence into our platform. You’ll not only build AI features for our customers but also champion an AI-first development culture within the engineering team.
- Mentor other engineers, sharing your expertise and contributing to their growth.
The Data Products Team is the definitive source of truth for Lighthouse's data, sitting at the foundational layer of our entire data ecosystem.
Their core mission is to model and deliver high-quality, foundational data products that are essential ingredients for all downstream product features, machine learning models, and data science initiatives across the company:
- Data Modeling & Ownership: Defining and optimizing core data entities for product and analytical use.
- Pipeline Engineering: Building robust ETL/ELT pipelines to transform raw integrated data into trusted domains.
- Data Quality: Establishing standards and monitoring the health of all foundational data assets.
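The pipeline engineering and data quality work above can be pictured with a toy batch transform: take raw booking records, drop duplicates and malformed rows, and normalise the fields. The record schema here is invented for illustration; the real pipelines run on GCP tooling (BigQuery, Airflow, dbt) rather than in-memory Python.

```python
# Toy batch-transform step: deduplicate by booking id, drop rows missing
# a price, and normalise price/currency. Field names are hypothetical.

def clean_bookings(raw: list[dict]) -> list[dict]:
    seen, out = set(), []
    for row in raw:
        bid = row.get("id")
        if bid is None or bid in seen or row.get("price") is None:
            continue  # skip malformed rows and duplicates
        seen.add(bid)
        out.append({"id": bid,
                    "price": float(row["price"]),        # coerce to float
                    "currency": row.get("currency", "EUR").upper()})
    return out

raw = [
    {"id": "b1", "price": "120.0", "currency": "eur"},
    {"id": "b1", "price": "120.0", "currency": "eur"},  # duplicate
    {"id": "b2", "price": None},                        # malformed
    {"id": "b3", "price": 95, "currency": "USD"},
]
print(clean_bookings(raw))
```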
What's in it for you?
- Flexible time off: Autonomy to manage your work-life balance.
- Alan Flex benefits: 160€/month for food or nursery.
- Flexible retribution: Optional benefits through tax-free payroll deductions for food, transportation and/or nursery.
- Wellbeing support: Subsidized ClassPass subscription.
- Comprehensive health insurance: 100% Alan coverage for you, your spouse, and dependents.
- Impactful work: Shape products relied on by 85,000+ users worldwide.
- Referral bonuses: Earn rewards for bringing in new talent.
- Experience in a data engineering role, with a proven track record of building scalable data pipelines.
- A product engineering mindset, with a focus on understanding business context and stakeholder needs.
- Professional proficiency in Python for data processing and pipeline development.
- Strong knowledge of cloud database solutions such as BigQuery, Snowflake, or Databricks.
- You are a forward-thinking builder who views AI as a core component of modern architecture. You have a proven interest (or experience) in working with LLMs, agentic workflows, or AI-assisted coding tools to ship higher-quality code, faster.
- Excellent communication and stakeholder management skills.
- Experience with microservice architectures and data streaming systems like Kafka or Google Cloud Pub/Sub.
- Familiarity with data governance or data quality tools such as Atlan or Soda.
- Experience mentoring other engineers.
Mostly, but not limited to: GCP, Python, BigQuery, Kubernetes, Airflow, dbt, Terraform, Atlan (data governance tool), Soda.
Thank you for considering a career with Lighthouse. We are committed to fostering a diverse and inclusive workplace that values equal opportunity for all. We welcome candidates from all backgrounds, regardless of age, gender, race, religion, sexual orientation, and disability. We actively encourage applications from individuals with disabilities and are dedicated to providing reasonable accommodations throughout the recruitment process and during employment to ensure all qualified candidates can participate fully. Our commitment to equality is not just a policy; it's part of our culture.
If you share our passion for innovation and teamwork, we invite you to join us in shaping the future of the hospitality industry. At Lighthouse, our guiding light is to be an equal opportunity employer, and we encourage individuals from all walks of life to apply. Not ticking every box? No problem! We value diverse backgrounds and unique skill sets. If your experience looks a little different from what we've described, but you're passionate about what we do and are a quick learner, we'd love to hear from you.
We value the unique perspective and talents that you bring, and we're excited to see how your light can shine within our team. We can't wait to meet you and explore how we can grow and succeed together, illuminating the path towards a brighter future for the industry.
Senior DevSecOps Engineer
Talan · Madrid, ES · 13 Feb
Skills: Python, Agile, Scrum, Jenkins, Docker, Cloud Computing, Ansible, Oracle, Groovy, OpenShift, AWS, Bash, QA, Terraform, Big Data, Salesforce, Office
Company Description
Talan - Positive Innovation
Talan is an international consulting group specializing in innovation and business transformation through technology. With over 7,200 consultants in 21 countries and a turnover of €850M, we are committed to delivering impactful, future-ready solutions.
Talan at a Glance
Headquartered in Paris and operating globally, Talan combines technology, innovation, and empowerment to deliver measurable results for our clients. Over the past 22 years, we've built a strong presence in the IT and consulting landscape, and we're on track to reach €1 billion in revenue this year.
Our Core Areas of Expertise
- Data & Technologies: We design and implement large-scale, end-to-end architecture and data solutions, including data integration, data science, visualization, Big Data, AI, and Generative AI.
- Cloud & Application Services: We integrate leading platforms such as SAP, Salesforce, Oracle, Microsoft, AWS, and IBM Maximo, helping clients transition to the cloud and improve operational efficiency.
- Management & Innovation Consulting: We lead business and digital transformation initiatives through project and change management best practices (PM, PMO, Agile, Scrum, Product Ownership), and support domains such as Supply Chain, Cybersecurity, and ESG/Low-Carbon strategies.
We work with major global clients across diverse sectors, including Transport & Logistics, Financial Services, Energy & Utilities, Retail, and Media & Telecommunications.
Job Description
The position is remote, but candidates must be based in Málaga or Madrid.
Project, Role and Task Descriptions:
• Design, implement, and maintain secure CI/CD pipelines for application build, test, and deployment.
• Integrate security scanning, compliance checks, and vulnerability management into development and deployment workflows.
• Automate infrastructure provisioning, configuration, and application deployment using modern DevSecOps tools.
• Collaborate with development, QA, security, and operations teams to ensure security is embedded throughout the SDLC.
• Support and enhance containerization, orchestration, and cloud environments with a strong focus on security best practices.
Qualifications
- CI/CD, Version Control & Security Integration: Experience building enterprise-grade CI/CD pipelines. GitHub (branching, PR workflows, GitHub Actions), GitHub Actions (secure workflows, secrets management, runner configuration), Jenkins (scripted/declarative pipelines, shared libraries), SonarQube (code quality, SAST), Fortify (static code analysis, security scanning). Experience setting up artifact repositories (Nexus, JFrog, ECR).
- Configuration Management & Automation: Ansible (roles, playbooks, secure inventory handling). Puppet (manifests, modules, environment management). Strong understanding of Infrastructure as Code (IaC) concepts and tooling (Terraform or CloudFormation).
- Scripting & Development: Bash, Python, Groovy (both for Jenkins and development). Ability to write automation scripts.
- Cloud: EC2, S3, IAM (roles, policies, least privilege), VPC networking basics, AWS CloudWatch, SSM, ECS/EKS.
- Nice to have: Docker, OpenShift, Helm.
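The "security integration" qualifications above often boil down to small gate scripts in the pipeline. As a hedged sketch, here is a hypothetical gate that parses a scanner's JSON report and blocks the build if critical issues exceed a threshold; the report format is invented, since real tools such as SonarQube and Fortify each have their own schemas.

```python
import json

# Hypothetical CI security gate: parse a scan report (invented format)
# and decide whether the build may proceed.

def gate(report_json: str, max_critical: int = 0) -> bool:
    """Return True if critical issues are within the allowed threshold."""
    report = json.loads(report_json)
    criticals = [i for i in report.get("issues", [])
                 if i.get("severity") == "CRITICAL"]
    return len(criticals) <= max_critical

report = json.dumps({"issues": [
    {"id": 1, "severity": "CRITICAL"},
    {"id": 2, "severity": "MINOR"},
]})
print(gate(report))  # False
```

In Jenkins or GitHub Actions, such a script would run after the scan stage and exit non-zero when `gate` returns False.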
Additional Information
What do we offer you?
- Possibility to manage work permits.
- Permanent, full-time contract.
- Smart Office Pack so that you can work comfortably from home.
- Training and career development.
- Benefits and perks such as private medical insurance, life insurance, language lessons, etc.
- Possibility to be part of a multicultural team and work on international projects.
If you are passionate about data, development & tech, we want to meet you!
Sr Backend Engineer - API Payments (AI-Native)
CAS TRAINING · Málaga, ES · 13 Feb
Skills: Java, Kubernetes, REST, Spring, Microservices, Terraform, Kafka
Job Description
We are looking for a Senior Backend Engineer specialising in APIs and payments to join a high-level banking environment within a global BaaS platform.
The role will focus on the design, development and maintenance of microservices and APIs within a cloud-native ecosystem, centred on the payments, collections and cash management domains.
Main Responsibilities
- Design and maintain domain APIs (Payments/Collections).
- Document REST APIs with OpenAPI 3.1 and AsyncAPI.
- Implement secure authentication and authorisation: OAuth 2.1, OpenID Connect (OIDC), mTLS, JWT.
- Develop microservices with: Java 17+ (Spring Boot 3) as the primary stack, Python 3.11+ (FastAPI) as secondary.
- Implement event-driven flows with Apache Kafka.
- Integrate relational databases, caches and vector DBs.
- Build CI/CD pipelines with GitHub Actions and ArgoCD.
- Manage infrastructure as code with Terraform.
- Implement observability with Datadog, Dynatrace, Prometheus and ELK.
- Ensure compliance with regulations: PSD2, GDPR, DORA.
- Lead technical decisions and mentor other engineers.
Must-have Requirements
- 7+ years of experience in backend and API development.
- Experience in payments or cash management.
- Proficiency in: Java (Spring Boot), Python (FastAPI).
- Experience with: Kafka, PostgreSQL, Redis.
- Solid security knowledge: OAuth 2.1, OIDC, mTLS.
- Experience with: Kubernetes, Terraform, GitOps (ArgoCD).
- Knowledge of financial regulations: PSD2, GDPR, DORA.
- High level of English (essential).
Nice-to-have Requirements
- Experience with: vector databases, RAG, LangChain.
- Experience in LLMOps and AI governance.
- Knowledge of Confluent Platform.
DevOps
Aubay · Barcelona, ES
Remote work available · Skills: Jenkins, Ansible, DevOps, Terraform
Duties
Performance monitoring and analysis, deployment automation, infrastructure management, CI/CD process improvement, and support for technical teams.
Requirements
Experience with Nexthink and/or Dynatrace; knowledge of DevOps, Terraform, Ansible and Jenkins. Experience in automation and infrastructure as code is a plus.
Hybrid working model in Barcelona (2 days on-site, 3 days remote).
*A 33% disability certificate will be considered a plus.
What we offer
At AUBAY we are recruiting a DevOps engineer for Barcelona.
We offer the opportunity to be part of a company in continuous growth, taking part in innovative projects that will let you round out your training and develop your capabilities. We value commitment and dedication to the work you do.
Aubay is a multinational digital services company (DSC) founded in 1998, currently growing strongly. We operate in high-value-added markets, both in France and elsewhere in Europe. Aubay currently has 5,000 people working with us.
From consultancy to all kinds of technology projects, we support the transformation and modernisation of information systems across every sector, including industry, R&D, telecommunications and infrastructure, and especially the major banks and insurance companies, which account for more than 80% of our French revenue and 65% of our European revenue.
Join us. We look forward to meeting you!
#LI-AL1
FullStack Developer (AI Native DevEx Engineer)
sg tech · Madrid, ES · 12 Feb
Skills: React, API, Node.js, Python, TypeScript, Postman, Terraform
Description
At SG Tech we are looking for an AI-Native Full Stack / Developer Experience Engineer responsible for designing and delivering next-generation developer experiences for the global BaaS platform. The position owns the developer portals, SDKs, sandbox environments, and AI-powered documentation and support agents.
We need a mid-senior profile with solid experience in front-end and API Developer Experience. AI experience is valued, but not essential.
Main Responsibilities
Developer Portals and Frontend
- Build and maintain API portals using Apigee Developer Portal, React, Next.js and TypeScript.
- Implement developer-facing features: multi-language SDK generation, semantic API discovery, usage analytics, and monetisation support (metering mechanisms).
- Develop front-end tooling for SLA tracking, developer telemetry and API flow visualisation.
Sandbox, Testing and Documentation
- Manage the sandbox and testing environments (Postman collections, contract testing) for external partners and internal teams.
- Automate documentation and changelog generation using LLM-assisted tools; own documentation versioning and validation.
Conversational Support and Developer Tools
- Provide conversational developer support (ChatOps) via agents backed by ChatGPT Enterprise, integrated into the portal and Slack.
- Collaborate with the Platform team (Global BaaS) on API Gateway integrations, and with the Backend Leads on sample applications and domain use cases.
DevEx Operations
- Maintain CI/CD pipelines for the DevEx services and ensure observability (Prometheus, Datadog).
AI-related responsibilities are optional and can be acquired during onboarding.
Core Tech Stack
Frontend: React, Next.js, TypeScript
Backend: Node.js (NestJS/Express), Python (FastAPI)
API Management & Portal: Apigee Developer Portal
Testing & Sandbox: Postman, Pact, JMeter, K6
Observability: Prometheus, ELK, Datadog
CI/CD & IaC: GitHub Actions, ArgoCD, Terraform
AI tooling: ChatGPT Enterprise, GitHub Copilot, OpenAI/Azure AI for documentation automation
Security: OAuth 2.1, OIDC, mTLS where applicable
AI-native components (semantic search, LLM agents, automation) are desirable but not mandatory.
AI-Native and DX Practices
• LLM-based generation of documentation, changelogs, SDKs, and sample code, with validation and review gates.
• Semantic search (vector DB) in the portal for API discovery.
• A conversational assistant with governed LLMOps practices and audit logging.
AI skills can be acquired during onboarding.
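Semantic API discovery ultimately boils down to ranking API descriptions by embedding similarity to a user's query. A toy sketch with hand-made, hypothetical embedding vectors (a real portal would use an embedding model and a vector database; the API names and vectors below are invented):

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Hypothetical pre-computed embeddings of API descriptions.
API_EMBEDDINGS = {
    "payments-api": [0.9, 0.1, 0.0],
    "accounts-api": [0.1, 0.8, 0.2],
    "kyc-api":      [0.0, 0.2, 0.9],
}

def discover(query_embedding, top_k=2):
    """Rank catalogued APIs by similarity to the query embedding."""
    ranked = sorted(
        API_EMBEDDINGS.items(),
        key=lambda kv: cosine(query_embedding, kv[1]),
        reverse=True,
    )
    return [name for name, _ in ranked[:top_k]]

print(discover([1.0, 0.0, 0.1]))  # ['payments-api', 'accounts-api']
```

A vector store replaces the linear scan with an approximate nearest-neighbour index, but the ranking principle is the same.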
Requirements
Must-Have Requirements
• Proficiency in React, Next.js, TypeScript, and Node.js.
• Experience with Apigee Developer Portal and SDK generation for APIs.
• Familiarity with OpenAPI 3.1, contract testing, and sandbox environment management.
• Experience integrating LLM-based documentation tooling and ChatOps assistants.
Front-end and API DevEx strength takes precedence over AI experience.
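For illustration, a minimal, hypothetical OpenAPI 3.1 document of the kind a developer portal, SDK generator, and contract tests would all consume (the API and fields are invented, not from the actual platform):

```yaml
openapi: 3.1.0
info:
  title: Sandbox Accounts API   # hypothetical sandbox API
  version: 1.0.0
paths:
  /accounts/{id}:
    get:
      operationId: getAccount
      parameters:
        - name: id
          in: path
          required: true
          schema:
            type: string
      responses:
        "200":
          description: Account found
          content:
            application/json:
              schema:
                type: object
                required: [id, status]
                properties:
                  id:
                    type: string
                  status:
                    type: string
```

Keeping a spec like this as the single source of truth lets SDKs, docs, and mock sandbox responses all be generated from one artifact.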
Nice-to-Have Requirements
• Experience with semantic search, vector stores, and conversational capabilities for developer support.
• Familiarity with Camunda or Apache Camel for tooling workflow orchestration.
• Proficiency in English and Spanish (assessed).
AI experience is valued but not a dealbreaker.
About the Role
A role focused on end-to-end ownership of the Developer Experience: enabling API adoption, discovery, and integration through advanced tooling, intelligent automation, and conversational support.
FullStack Developer
Feb 11 · Sopra Steria
Sevilla, ES
FullStack Developer
Sopra Steria · Sevilla, ES
Javascript Python Agile Linux Angular Git REST Oracle AWS SOAP PostgreSQL Vue.js RabbitMQ Terraform
Company description
Because working at Sopra Steria also means feeling Sopra Steria.
We are a recognized European leader in consulting, digital services, and software development, with around 56,000 employees in almost 30 countries and more than 4,000 in Spain.
We focus on people, their training, and their professional development, which drives us to grow and improve constantly.
We are passionate about digital and, like you, we are looking for the best of adventures. We want your day-to-day to become the best of your inspirations: learn, contribute, have fun, grow, and above all, enjoy yourself to the fullest.
If you want to be part of a "Great Place to Work" team, keep reading!
Job description
We are looking for a Full Stack Developer with experience in Python and Vue.js to collaborate on Aeroline projects. You will join a multidisciplinary team with a strong focus on shared data management and analytics and a clear orientation toward agile development methodologies.
Day-to-day responsibilities include...
- Developing / maintaining backend functionality (new modules, data models, APIs, scripts).
- Developing / maintaining frontend functionality (graphical interfaces, new components).
- Creating and maintaining documentation.
- Creating and maintaining tests.
- User support.
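As a flavour of the backend work listed above, a minimal, hypothetical Python module pairing a data model with a small in-memory service layer, of the kind a REST endpoint would sit on top of (all names are illustrative, not from the actual project):

```python
from dataclasses import dataclass, asdict
from typing import Optional

@dataclass
class Flight:
    """Hypothetical data model for an airline-domain module."""
    code: str
    origin: str
    destination: str

class FlightService:
    """Tiny service layer backed by an in-memory store; a real module
    would persist to Oracle/PostgreSQL and be exposed via a REST API."""

    def __init__(self):
        self._store = {}

    def create(self, flight: Flight) -> dict:
        self._store[flight.code] = flight
        return asdict(flight)

    def get(self, code: str) -> Optional[dict]:
        flight = self._store.get(code)
        return asdict(flight) if flight is not None else None

svc = FlightService()
svc.create(Flight("IB123", "SVQ", "MAD"))
print(svc.get("IB123"))
# {'code': 'IB123', 'origin': 'SVQ', 'destination': 'MAD'}
```

Separating the model and service from the web framework keeps the logic testable on its own, which fits the "creating and maintaining tests" responsibility above.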
Requirements
What do we need?
- Experience with the Python and JavaScript programming languages.
- Backend frameworks: Django/Flask, FastAPI.
- Frontend frameworks: Vue.js/React, Angular.
- Design and handling of REST APIs.
- Code management: Git, GitHub.
- Databases: Oracle, PostgreSQL.
We value it positively if you bring...
- A high level of English.
- Knowledge of and/or experience with Agile methodology.
- Testing methodologies and tools.
- Communication protocols: SOAP.
- Knowledge and use of Linux systems.
- Messaging systems: RabbitMQ, Celery.
- Cloud services and architecture: Amazon Web Services (AWS), Terraform.
Additional information
What do we have for you?
- Permanent contract and full-time position
- On-site model in Seville
- 23 days of vacation
- Continuous training: technical, soft, and language skills. We offer access to certifications, training from the main technology partners, online platforms, and much more!
- Life and accident insurance
- The option to join our flexible compensation plan (health insurance, childcare vouchers, transport, meals, and training)
- Access to Privilege Club, where you will find attractive discounts on leading brands
- Personalized, detailed onboarding. We accompany you every step of the way so you feel like a #soprano from day one.
- An office with spaces set aside for leisure. Work and fun combined!
- Camaraderie and a good atmosphere; we keep the power of togetherness in mind.
And most importantly... You have the opportunity to develop your professional career with us: together we will create a personalized career plan. You will receive training, we will set goals, and we will follow up to make sure we achieve them together. We listen to your priorities and fight for them.
Your voice matters here! Join us and be part of something bigger!
The world is how we shape it
We are committed to respecting diversity, creating an inclusive work environment, and applying policies that favour inclusion and promote social and cultural respect regarding gender, age, functional diversity, sexual orientation, and religion, with equal opportunities.