DevOps Engineer
6 Mar · GMV · Madrid, ES
Remote Java Python Linux Docker Kubernetes DevOps PostgreSQL
Are you looking for an innovative, well-established place to grow professionally? GMV is a technology group with extensive experience. We are expanding our teams in the space sector to take on projects related to Space Surveillance and Tracking (SST) and Space Domain Awareness (SDA). We like to get straight to the point, so here is what you won't find online.
WHAT CHALLENGE WILL YOU FACE?
You will work in GMV's multidisciplinary SST/SDA team, spread across several countries, with several projects currently underway. You will be involved in the following tasks:
- Automating the software development life cycle.
- Configuration control and versioning of software products.
- Helping developers automate repetitive tasks.
- Collaborating with other members of a geographically distributed team.
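The versioning and release-automation tasks above often come down to small scripts; a minimal sketch of a semantic-version bump helper (the function and its use are illustrative, not GMV's actual tooling):

```python
# Illustrative sketch: bump a product's semantic version as part of a
# release-automation script. Names and conventions are hypothetical.

def bump(version: str, part: str) -> str:
    """Return `version` ("MAJOR.MINOR.PATCH") with the given part bumped."""
    major, minor, patch = (int(x) for x in version.split("."))
    if part == "major":
        return f"{major + 1}.0.0"
    if part == "minor":
        return f"{major}.{minor + 1}.0"
    if part == "patch":
        return f"{major}.{minor}.{patch + 1}"
    raise ValueError(f"unknown part: {part!r}")

print(bump("1.2.3", "minor"))  # → 1.3.0
```

In a CI pipeline a helper like this would typically run before tagging, so every build of a software product gets a traceable version.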
WHAT DO WE NEED ON OUR TEAM?
For this position, we are looking for a highly motivated DevOps Engineer for activities in the areas of Space Surveillance and Tracking (SST) and Space Domain Awareness (SDA), who wants a stable environment in which to grow, with a track record and experience in:
- Virtualization
- CI/CD pipelines
- Docker
- Python
- Linux
We are looking for engineers who bring new ideas, new ways of working and, above all, a passion for challenges.
We will also value knowledge of:
- Kubernetes
- Java
- PostgreSQL
- Security
WHAT DO WE OFFER?
💻 Hybrid work model and 8 weeks per year of remote work outside your usual geographic area.
🕑 Flexible start and finish times, and an intensive (shorter) workday on Fridays and in summer.
🚀 A personalized career plan, training, and support for learning languages.
🌍 National and international mobility. We offer a relocation package.
💰 Competitive compensation with ongoing reviews, flexible remuneration, and brand discounts.
💪 Wellbeing program: medical, dental and accident insurance; free fruit and coffee; training in physical, mental and financial health; and much more!
⚠️ In our selection processes you will always have phone and personal contact, in person or online, with our talent acquisition team. We will never ask for bank transfers or card details.
❤️ We promote equal opportunity in hiring and are committed to inclusion and diversity.
WHAT ARE YOU WAITING FOR? JOIN US!
Senior DevOps Engineer
6 Mar · AstraZeneca · Barcelona, ES
Docker Cloud Computing Kubernetes TypeScript SaaS AWS Bash DevOps Kafka Machine Learning
Role based in Barcelona: 3 days in the office / 2 days at home.
We are seeking a passionate and experienced Senior DevOps Engineer to lead the transformation of our SaaS platform infrastructure and operations. Join us in leveraging cutting-edge technology, data, and AI to revolutionize life sciences and improve billions of lives globally. In this pivotal role, you will design, implement, and optimize robust cloud-based infrastructure and operational frameworks that enable rapid innovation and deliver exceptional system reliability. You will also guide and mentor team members, sharing your expertise in AWS CDK automation, Kubernetes, networking, and DevOps best practices.
Key Responsibilities
- Infrastructure Design & Management: Architect and manage scalable, multi-tenant AWS-based infrastructure using AWS CDK, ensuring modular and maintainable codebases.
- Kubernetes & EKS: Lead the deployment and management of Kubernetes clusters using Amazon EKS, implementing best practices for scalability and security.
- CI/CD Pipelines: Build, manage, and enhance automated CI/CD pipelines to ensure efficient, reliable deployments using tools like ArgoCD and GitHub Actions.
- IAM Role Management: Design, maintain, and optimize IAM roles, policies, and guardrails to ensure least privilege access across AWS resources.
- Networking: Architect and maintain AWS networking components such as VPCs, Transit Gateway, ALB, and Security Groups, ensuring robust security and performance.
- Security & Compliance: Implement DevSecOps best practices, including IAM security, encryption standards, and compliance with industry regulations (GXP, GDPR, HIPAA, NIST).
- AWS WAF & Firewall Policies: Design and implement firewall policies and AWS WAF configurations to protect applications from web threats.
- Automation: Lead efforts to automate infrastructure provisioning, application releases, and ETL workflows, reducing manual intervention and improving efficiency.
- Monitoring & Incident Response: Develop and implement comprehensive monitoring, logging, and alerting systems using OpenTelemetry, Prometheus, Grafana, AWS CloudWatch, and AWS CloudTrail.
- AWS EventBridge & CloudTrail: Utilize AWS EventBridge for event-driven automation and troubleshoot security and operational issues using AWS CloudTrail.
- Governance & Strategic Input: Drive governance processes, including security reviews, cost optimization, and operational consistency across the platform.
- AWS Control Tower & Multi-Account Management: Manage multiple AWS accounts using AWS Control Tower and best practices for account isolation.
- AI & Machine Learning: Exposure to AI tools and frameworks is a plus.
- Mentorship & Leadership: Mentor and guide junior and mid-level engineers, fostering a culture of learning and collaboration. Provide technical leadership in the adoption of AWS CDK and best practices for cloud automation.
- Collaboration: Partner with cross-functional teams, including product management and security, to align DevOps strategies with business goals and ensure cohesive development and operational workflows.
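Several responsibilities above revolve around least-privilege IAM. As a sketch of what "least privilege" means in practice, here is a generated read-only S3 policy document (the bucket name and function are hypothetical examples, not AstraZeneca's configuration):

```python
import json

# Hedged sketch: generate a least-privilege IAM policy document granting
# read-only access to one S3 bucket. Bucket name is a placeholder.

def read_only_s3_policy(bucket: str) -> str:
    policy = {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Action": ["s3:GetObject", "s3:ListBucket"],
            "Resource": [
                f"arn:aws:s3:::{bucket}",       # the bucket itself (ListBucket)
                f"arn:aws:s3:::{bucket}/*",     # objects inside it (GetObject)
            ],
        }],
    }
    return json.dumps(policy, indent=2)

print(read_only_s3_policy("example-tenant-data"))
```

In a CDK codebase a policy like this would usually be expressed through CDK constructs rather than raw JSON; the document shape is the same either way.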
Required Experience & Qualifications
- Experience: 7+ years in DevOps or cloud infrastructure roles, with significant experience in SaaS and multi-tenant platforms. Proven track record of mentoring team members.
- Cloud Expertise: Expert knowledge of AWS services, including VPC, IAM, EC2, S3, RDS, Lambda, EKS, AWS WAF, AWS EventBridge, and AWS CloudTrail.
- Containerization & Orchestration: Deep proficiency in Docker, Kubernetes, Helm, and associated ecosystem tools.
- CI/CD Proficiency: Expertise in CI/CD tools such as ArgoCD and GitHub Actions.
- Infrastructure as Code (IaC): Advanced experience with AWS CDK (TypeScript preferred) and CloudFormation.
- Networking: Strong understanding of AWS networking services such as VPCs, Transit Gateway, ALB, and Security Groups.
- Security: In-depth knowledge of IAM, AWS KMS, encryption standards, AWS WAF, and security compliance frameworks including NIST.
- Monitoring & Alerting: Extensive experience with OpenTelemetry, Prometheus, Grafana, AWS CloudWatch, and AWS CloudTrail for monitoring and incident response.
- Data & ETL Pipelines: Familiarity with AWS Glue and Managed Kafka for real-time and batch data processing.
- Programming & Automation: Strong scripting and automation skills using TypeScript and Bash.
- Multi-Account AWS Management: Experience managing multiple AWS accounts with AWS Control Tower.
- Communication & Collaboration: Exceptional verbal and written communication skills, with the ability to explain complex technical concepts to diverse stakeholders.
Desired Experience & Qualifications
- Advanced expertise in AWS CDK, including building complex, reusable constructs and pipelines.
- Familiarity with Projen for automating CDK project configuration and management.
- Hands-on experience with Helm charts and Kubernetes manifests.
- Experience with monitoring and logging tools such as Prometheus, Grafana, and AWS CloudWatch. Exposure to multi-tenant SaaS platforms and best practices.
- Experience working with AI tools and frameworks.
Personal Attributes
- Mentor & Leader: Enjoys mentoring team members and fostering a collaborative, innovation-driven team culture.
- Organized & Adaptable: Able to manage multiple priorities and thrive in a fast-paced environment.
- Innovative: Passionate about leveraging technology to solve complex problems and drive efficiency.
- Customer-Focused: Dedicated to building infrastructure that delivers measurable business and customer value.
Work Arrangement:
This is an in-office role based in Barcelona, Spain, with a requirement to work a minimum of three days per week on-site.
Join Evinova and redefine healthcare with us. Apply now to be part of a team that's transforming life sciences with technology, data, and innovation.
Date Posted
2 Mar 2026
Closing Date
30 Mar 2026
AstraZeneca embraces diversity and equality of opportunity. We are committed to building an inclusive and diverse team representing all backgrounds, with as wide a range of perspectives as possible, and harnessing industry-leading skills. We believe that the more inclusive we are, the better our work will be. We welcome and consider applications to join our team from all qualified candidates, regardless of their characteristics. We comply with all applicable laws and regulations on non-discrimination in employment (and recruitment), as well as work authorization and employment eligibility verification requirements.
AWS Data Engineer (Python / PowerCenter)
CAS TRAINING · Madrid, ES
Python Agile AWS
At CAS Training, a leading company with more than 20 years in technology consulting, outsourcing and specialized training, we are looking to hire a professional with experience in AWS, Python and/or PowerCenter.
• Ability to produce technical designs.
• Experience in data management.
• Experience in end-to-end projects through to deployment in production.
• Experience communicating with Architecture and Infrastructure teams and business areas.
• Experience on Agile projects.
Running functional meetings, turning requirements into concrete tasks, and coordinating, executing and tracking them (both in terms of progress and of understanding the value they deliver).
Sector: Banking. Location: Madrid, hybrid with some flexibility.
Machine Learning Engineer
5 Mar · Laia · San Sebastián de los Reyes, ES
C# C++ C Machine Learning Data Science Photo Retouching Artificial Intelligence Pattern Recognition Computer Science Computer Vision Cloud Computing
WE’RE HIRING: ML ENGINEER (Computer Vision + C++ + Edge AI)
2–4 years of experience · A technical profile hungry for performance
If you are obsessed with squeezing out every millisecond of latency…
If C++ doesn't scare you…
If you want to take computer vision models from training to real production…
Keep reading.
We are building a high-performance Computer Vision architecture. We are not looking for a "researcher". We are looking for a builder.
What you will do:
- Build, implement and optimize Computer Vision models.
- Work with modern C++ and OpenCV in real-world environments.
- Integrate high-performance modules.
- Improve inference times and reduce latency in serverless functions.
- Take part in real production model deployments.
- Spot bottlenecks before they blow up.
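Reducing inference latency starts with measuring it. A minimal sketch of latency-percentile measurement (the workload is a dummy stand-in for a real inference call; in this role the equivalent would be C++ instrumentation):

```python
import time
import statistics

# Hedged sketch: measure p50/p99 latency of a callable before optimizing it.
# `model` is a dummy workload standing in for a real inference call.

def model(x):
    return sum(i * i for i in range(1000))

def latency_percentiles(fn, runs=200):
    samples = []
    for _ in range(runs):
        t0 = time.perf_counter()
        fn(None)
        samples.append((time.perf_counter() - t0) * 1000.0)  # milliseconds
    samples.sort()
    return {
        "p50_ms": statistics.median(samples),
        "p99_ms": samples[int(0.99 * (len(samples) - 1))],
    }

print(latency_percentiles(model))
```

Tail percentiles (p99) matter more than averages here: a serverless function that is fast on average but slow at the tail still misses latency budgets.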
What we are looking for (2–4 years of experience):
- Real experience with applied ML (ideally computer vision).
- A good command of C++ (C++11 or later).
- Experience with OpenCV and with Ambarella technology.
- Knowledge of model deployment (TensorFlow, TF.js or similar).
- An optimization and Clean Code mindset.
- The drive to build systems that scale.
Nice to have:
- WebAssembly.
- Concurrency / multithreading.
- CUDA.
- Experience with Firebase, serverless or cloud functions.
This is NOT:
- An academic role.
- A position for writing endless notebooks.
- A place where performance gets "optimized later".
This IS:
- A real product.
- Direct impact.
- Technical ownership from day one.
- Accelerated growth toward Senior.
- Code that lives in production.
If you want to be at a startup where AI is not marketing but the technological core, and you want to grow fast working on demanding systems…
DevOps Engineer (Data, Azure)
CAS TRAINING · Madrid, ES
API Python Azure REST DevOps Terraform Spark
At CAS Training, a leading company with more than 20 years in technology consulting, outsourcing and specialized training, we are looking to hire a DevOps Engineer specialized in Azure data environments, with demonstrable experience in designing, building and maintaining CI/CD pipelines for deploying and managing the life cycle of cloud data solutions.
We need a profile that is solid in both DevOps and Data, able to work autonomously within multidisciplinary data engineering teams.
Minimum requirements
- At least 3 years of experience as a DevOps engineer in Azure environments.
- Solid experience with CI/CD applied to data solutions.
- Hands-on experience with managed data services in Azure.
- Advanced knowledge of Azure DevOps governance and administration.
- Experience automating deployments in Databricks.
- Experience with IaC tools (Terraform or Bicep).
Nice to have:
- Experience with Python and/or Spark for validating and testing data pipelines.
- Experience with monitoring and observability practices.
Responsibilities:
- Design, development and maintenance of CI/CD pipelines in Azure DevOps.
- Automating the deployment and life-cycle management of data solutions in Azure.
- Automated deployment of Databricks notebooks, jobs, clusters and policies via pipelines.
- Management of artifacts, libraries and environment-specific data configurations.
- CI/CD integration via the Databricks CLI or REST API.
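Integrating CI/CD with the Databricks REST API usually means a pipeline step that calls an endpoint such as Jobs API 2.1 `run-now`. A sketch that only builds the request, with placeholder host, token and job id (no real workspace is contacted):

```python
import json

# Hedged sketch: assemble a Databricks Jobs API 2.1 "run-now" request as a
# CI/CD step might. Host, token, and job_id are placeholders.

def build_run_now_request(host: str, token: str, job_id: int):
    url = f"https://{host}/api/2.1/jobs/run-now"
    headers = {
        "Authorization": f"Bearer {token}",
        "Content-Type": "application/json",
    }
    body = json.dumps({"job_id": job_id})
    return url, headers, body

url, headers, body = build_run_now_request(
    "adb-example.azuredatabricks.net", "<token>", 42
)
print(url)
```

In a real pipeline the tuple would be passed to an HTTP client (or replaced entirely by `databricks jobs run-now` from the CLI); building the request separately keeps it easy to unit-test.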
What we offer:
- Being part of a dynamic, highly qualified team at a company in the middle of an expansion.
- Participation in innovative, cutting-edge projects for major first-tier clients across different market sectors.
- Long-term projects, professional stability and career progression.
- A permanent contract.
- Free access to CAS Training's annual training catalog.
- A salary negotiable based on the candidate's experience and skills.
Work model: hybrid in Madrid, Jaén, Seville, Granada or Córdoba.
Data Engineer
3 Mar · B. Braun Group · Barcelona, ES
Python Agile TSQL Azure Cloud Computing DevOps Terraform Power BI
We are seeking a Data Engineer to join our team, focusing on building scalable and governed data products in a cloud data mesh architecture for the SAP Finance & Controlling domain.
This specialized role is paramount for designing, maintaining, and optimizing robust data pipelines and semantic models on our Azure-based Data Analytics Platform, leveraging Databricks and Microsoft Fabric. The ideal candidate combines strong technical proficiency in modern data engineering with the ability to translate finance and controlling business logic into governed, performant data models.
Experience with SAP FI/CO processes is preferred, as well as advanced skills in data modeling, Data Contracts, and cost/performance optimization. You will be instrumental in ensuring high data quality, governance, and availability for critical business intelligence and analytical dashboards. We are looking for a proactive, solution-oriented individual eager to contribute to a multidisciplinary, agile, and international environment.
Your Tasks in the Team
- Design, build, and operate data pipelines on Azure Data Factory and Databricks (PySpark/SQL, Delta Lake) using Azure DevOps for CI/CD.
- Apply advanced data modeling techniques (dimensional/star, data vault, normalized models) and implement Medallion architecture (Bronze/Silver/Gold).
- Define and enforce Data Contracts: schemas, SLAs/SLOs, versioning, and validation gates.
- Optimize Databricks workloads for performance and cost (partitioning, Z-ORDER, caching, Photon, autoscaling, cluster policies).
- Standardize delivery with Databricks Asset Bundles and implement observability (job metrics, audit logs).
- Ensure compliance with governance, security, and regulatory requirements via Unity Catalog and RBAC/ABAC policies.
- Embed data quality frameworks, automated tests, and monitoring for pipeline health, SLA breaches, and anomaly detection.
- Collaborate closely with Finance stakeholders and domain engineers to ensure KPI sign-off and business alignment.
- Contribute to technical documentation, participate in code reviews, and drive continuous improvement.
- (Preferred) Build semantic models in Microsoft Fabric/Power BI aligned with curated data and governed KPIs.
- (Preferred) Translate SAP FI/CO business logic (GL, AP/AR, allocations, exchange rates) into reconciled semantic models.
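The Medallion (Bronze/Silver/Gold) pattern mentioned above can be pictured with a toy promotion step: raw Bronze records are validated and deduplicated into a Silver set. Field names are hypothetical, and the sketch uses plain Python rather than the PySpark the role actually calls for:

```python
# Illustrative Medallion-style step: promote raw "Bronze" records to a
# cleaned "Silver" set. Fields and rules are invented for the example.

bronze = [
    {"doc_id": "100", "amount": "12.50", "currency": "EUR"},
    {"doc_id": "101", "amount": "bad",   "currency": "EUR"},  # unparsable
    {"doc_id": "100", "amount": "12.50", "currency": "EUR"},  # duplicate
]

def to_silver(records):
    seen, silver = set(), []
    for r in records:
        try:
            amount = float(r["amount"])
        except ValueError:
            continue  # a real pipeline would quarantine these rows
        if r["doc_id"] in seen:
            continue  # deduplicate on the business key
        seen.add(r["doc_id"])
        silver.append({**r, "amount": amount})
    return silver

print(len(to_silver(bronze)))  # → 1
```

The Gold layer would then aggregate Silver into the governed, KPI-ready models the Finance stakeholders sign off on.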
Requirements
- Strong experience with Microsoft Azure (ADLS Gen2, Data Factory, Key Vault) and foundational networking/security.
- Hands-on expertise in Databricks: PySpark, SQL, Delta Lake, Unity Catalog, Asset Bundles; performance tuning and cost optimization.
- Advanced data modeling skills: dimensional/star, data vault, semantic layers; optimization for query performance.
- Proficiency in Python and SQL for data processing; modular code and unit testing.
- Experience with Azure DevOps (Repos, Pipelines, approvals) and CI/CD strategies with rollback procedures.
- Knowledge of Data Contracts: schema definition, SLAs/SLOs, versioning, compatibility policies.
- Familiarity with event-driven architectures and real-time data streaming.
- Experience working in Agile/Scrum environments.
- Fluent in English (written and spoken).
Preferred
- SAP FI/CO domain knowledge (GL, AP/AR, Asset Accounting, Cost Center Accounting, Internal Orders, CO-PA).
- Microsoft Fabric / Power BI: semantic modeling, dataset governance, KPI standardization.
- Infrastructure as Code (Terraform for Azure & Databricks).
- Data Quality & Anomaly Detection frameworks (DLT expectations, Great Expectations).
- Cost governance: tagging, dashboards, budgets/alerts.
- Advanced modeling patterns: slowly changing dimensions, snapshotting, late-arriving facts.
- Security & Compliance: data masking, tokenization, PII minimization.
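A Data Contract, as described in this listing, boils down to a declared schema plus a validation gate that rejects non-conforming records before publication. A minimal stdlib sketch (the contract's fields and types are invented for illustration):

```python
# Hedged sketch of a Data Contract validation gate: check each record
# against a declared schema before publishing. Fields are illustrative.

CONTRACT = {"doc_id": str, "amount": float, "currency": str}

def violations(record: dict, contract: dict = CONTRACT) -> list:
    """Return a list of human-readable contract violations (empty = valid)."""
    problems = []
    for field, expected in contract.items():
        if field not in record:
            problems.append(f"missing field: {field}")
        elif not isinstance(record[field], expected):
            problems.append(f"{field}: expected {expected.__name__}")
    return problems

print(violations({"doc_id": "100", "amount": 12.5, "currency": "EUR"}))  # → []
print(violations({"doc_id": "101", "amount": "12.5"}))
```

Production contracts add versioning, SLAs/SLOs, and compatibility policies on top; tools like Great Expectations or DLT expectations (both named above) industrialize the same idea.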
Freelance Machine Learning Engineer (Python)
Mindrift · Madrid, ES
Remote Python Machine Learning
This opportunity is only for candidates currently residing in the specified country. Your location may affect eligibility and rates. Please submit your resume in English and indicate your level of English proficiency.
At Mindrift, innovation meets opportunity. We believe in using the power of collective intelligence to ethically shape the future of AI.
What We Do
The Mindrift platform connects specialists with AI projects from major tech innovators. Our mission is to unlock the potential of Generative AI by tapping into real-world expertise from across the globe.
About The Role
GenAI models are improving very quickly, and one of our goals is to make them capable of addressing specialized questions and achieving complex reasoning skills. If you join the platform as a Machine Learning expert, you'll have the opportunity to collaborate on these projects.
Although every project is unique, you might typically:
- Design original computational STEM problems that simulate real scientific workflows
- Create problems that require Python programming to solve
- Ensure problems are computationally intensive and cannot be solved manually within reasonable timeframes (days/weeks)
- Develop problems requiring non-trivial reasoning chains and creative problem-solving approaches
- Verify solutions using Python with standard libraries (numpy, pandas, scipy, sklearn)
- Document problem statements clearly and provide verified correct answers
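The "verify solutions using Python" task above is essentially writing a script whose output confirms a known answer. A self-contained example using only the stdlib (the listing mentions numpy/scipy; plain `math` keeps the sketch dependency-free):

```python
import math

# Illustrative verification script: numerically confirm a known result,
# the integral of sin(x) over [0, π], with a trapezoidal rule.

def trapezoid(f, a, b, n=100_000):
    """Approximate the integral of f over [a, b] with n trapezoids."""
    h = (b - a) / n
    total = 0.5 * (f(a) + f(b))
    total += sum(f(a + i * h) for i in range(1, n))
    return total * h

result = trapezoid(math.sin, 0.0, math.pi)
print(round(result, 6))  # exact value is 2
```

A well-posed problem for these projects would be computationally heavier than this, but the verification pattern (independent numerical check against a documented answer) is the same.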
Simply apply to this post, qualify, and get the chance to contribute to projects aligned with your skills, on your own schedule. From creating training prompts to refining model responses, you'll help shape the future of AI while ensuring technology benefits everyone.
Requirements
- You hold a Master's or PhD Degree in Computer Science, Mathematics, Physics, Engineering, or a similar STEM field
- You have at least 5 years of Machine Learning experience with proven business impact
- Strong programming skills in Python (numpy, pandas, scipy, sklearn)
- Solid understanding of numerical methods and computational algorithms
- Research or industry experience involving computational problem-solving
- Your level of English is advanced (C1) or above
- You are ready to learn new methods, able to switch between tasks and topics quickly and sometimes work with challenging, complex guidelines
- Our freelance role is fully remote so, you just need a laptop, internet connection, time available and enthusiasm to take on a challenge
Why this freelance opportunity might be a great fit for you:
- Get paid for your expertise, with rates that can go up to $34/hour depending on your skills, experience, and project needs
- Take part in a part-time, remote, freelance project that fits around your primary professional or academic commitments
- Work on advanced AI projects and gain valuable experience that enhances your portfolio
- Influence how future AI models understand and communicate in your field of expertise
Senior DevOps Engineer (SRE)
3 Mar · EPAM · Málaga, ES
C# Python TSQL Azure Linux Docker Cloud Computing Kubernetes Jira PowerShell Bash DevOps RabbitMQ Office
Do you have a background in systems engineering and strong experience in DevOps? Are you an open-minded professional with good English skills? If it sounds like you, this could be the perfect opportunity to join EPAM as a Senior DevOps Engineer.
EPAM is shaping the digital future for Fortune 1000 companies, building complex solutions using modern technologies. We are looking for a Senior DevOps Engineer with an open-minded personality, who can join our friendly environment and become a core contributor to our team of experts. The position requires working shifts (early shift: 7am–1pm UK time; late shift: 1pm–7pm UK time; working from the office is not expected on shift days). When you are not on shift, you'll be required to work from the office 4 days per week.
Responsibilities
- Provide support to a development team from the Ops perspective
- Configure Continuous Integration and Delivery pipelines
- Configure new and already existing environments
- Participate in discussions, influence outcomes, and make recommendations on feasibility and processes and architecture
- Devise/modify procedures to solve problems considering computer equipment capacity and limitations, operating time, and desired results
- Consult with users and develop business relationships and integrate activities with other IT departments to ensure successful implementation
- Monitor and report to management on the status of project efforts, anticipating/identifying issues that inhibit the attainment of project goals and implementing corrective actions
- Foster and maintain good relationships with customers and IT colleagues to meet expected customer service levels
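The monitoring-and-reporting side of the responsibilities above often starts with scanning logs and summarizing status for the team. A small sketch with an invented log format (none of this reflects EPAM's actual systems):

```python
# Hedged sketch: summarize job-log severity counts as a simple Ops
# status report. The log format is invented for the example.

LOG = """\
2026-03-02 07:01 INFO  nightly-etl finished
2026-03-02 07:05 ERROR trade-feed connection refused
2026-03-02 07:09 WARN  cache-warm slow response
2026-03-02 07:12 ERROR trade-feed connection refused
"""

def summarize(log: str) -> dict:
    counts = {"INFO": 0, "WARN": 0, "ERROR": 0}
    for line in log.splitlines():
        for level in counts:
            if f" {level} " in line:  # match the padded severity column
                counts[level] += 1
    return counts

print(summarize(LOG))  # → {'INFO': 1, 'WARN': 1, 'ERROR': 2}
```

In practice this grows into alert thresholds (e.g. page when ERROR count spikes) wired into whatever workflow system the team uses.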
Requirements
- University degree in Computer Science, Engineering, or a similar discipline
- Strong PowerShell or similar scripting experience like Python or Bash
- TeamCity (or other CI tools), BitBucket (or other source version control systems), Artifactory (or other artifacts management system)
- Solid understanding of SDLC, CI/CD
- Windows and Linux operating systems
- Understanding of business domain (Finance business preference)
- Solid understanding and experience working with high availability, high performance, multi-data center systems
- Troubleshoot complex issues ranging from system resources to application stack traces
- Extensive DevOps background with varied hands-on experience of supporting and releasing technical solutions for clients, preferably in a Microsoft environment
- Problem solving and analytical thinking skills
- Outstanding team working skills and ability to work autonomously
- Very good communication skills
Nice to have
- Cloud based computing
- Azure
- Docker
- Kubernetes
- Control-M (or other job workflow systems)
- JIRA or similar technologies
- SQL databases administering experience, RDBMS optimization skills
- Ability to read and debug C# (or similar programming language) at a basic level
- Some experience working with RESTful services, AMQP (RabbitMQ, AMPS)
- SAN arrays (Nimble)
- Virtualization technologies (VMWare)
We offer/Benefits
- Private health insurance
- EPAM Employees Stock Purchase Plan
- 100% paid sick leave
- Referral Program
- Professional certification
- Language courses
EPAM is a leading digital transformation services and product engineering company with 61,700+ EPAMers in 55+ countries and regions. Since 1993, our multidisciplinary teams have been helping make the future real for our clients and communities around the world. In 2018, we opened an office in Spain that quickly grew to over 1,450 EPAMers distributed between the offices in Málaga, Madrid and Cáceres as well as remotely across the country. Here you will collaborate with multinational teams, contribute to numerous innovative projects, and have an opportunity to learn and grow continuously.
Why Join EPAM
- WORK AND LIFE BALANCE. Enjoy more of your personal time with flexible work options, 24 working days of annual leave and paid time off for numerous public holidays.
- CONTINUOUS LEARNING CULTURE. Craft your personal Career Development Plan to align with your learning objectives. Take advantage of internal training, mentorship, sponsored certifications and LinkedIn courses.
- CLEAR AND DIFFERENT CAREER PATHS. Grow in engineering or managerial direction to become a People Manager, in-depth technical specialist, Solution Architect, or Project/Delivery Manager.
- STRONG PROFESSIONAL COMMUNITY. Join a global EPAM community of highly skilled experts and connect with them to solve challenges, exchange ideas, share expertise and make friends.
AWS Cloud Engineer with English
Aubay · Barcelona, ES
Python Cloud Computing AWS Terraform
Responsibilities
- Design and architect secure, scalable AWS infrastructures.
- Implement Infrastructure as Code (Terraform) and automate deployments with GitLab CI.
- Develop and maintain high-quality Python code for cloud solutions.
- Manage AWS services such as Lambda, Fargate, S3 and IAM, among others.
- Implement and maintain serverless and containerized architectures.
- Configure and administer SIEM solutions (Splunk) and AWS security tools.
- Apply DevSecOps practices and security across the entire development cycle.
- Set up monitoring, logging and alerting with CloudWatch, Prometheus, Grafana and PagerDuty.
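The serverless side of the role can be pictured with a minimal Lambda-style handler in Python; the event shape and response format are assumptions for illustration (a real deployment would be defined in Terraform and wired to an actual trigger):

```python
import json

# Hedged sketch: a minimal AWS Lambda-style Python handler, runnable
# locally. The event shape and message are hypothetical.

def handler(event, context=None):
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"hello, {name}"}),
    }

print(handler({"name": "Aubay"}))
```

The `statusCode`/`body` shape matches what API Gateway proxy integrations expect from a Lambda response; keeping handlers this thin makes them easy to test without any cloud dependency.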
Hybrid model: 3–4 days remote + 1–2 days on-site at our offices (next to the Bogatell metro station, Barcelona).
Requirements
- Experience in engineering and architecting AWS infrastructure solutions.
- Experience with AWS Landing Zone, AWS networking and security services, and a multi-account AWS strategy.
- Knowledge of Infrastructure as Code principles and design with Terraform.
- Experience with GitLab and GitLab CI.
- Proven experience writing Python code.
- A deep understanding of AWS infrastructure and services (Fargate, Lambda, S3, WAF, KMS, Transit Gateway, IAM, AWS Config).
- Experience with SIEM solutions, ideally Splunk.
- Experience with the following concepts: shift-left and DevSecOps approaches, SBOM, SAST, and AWS security and compliance services (AWS Config, Inspector, Network Firewall, etc.).
- Experience with logging, monitoring and alerting best practices on AWS using standard tools (Splunk, CloudWatch Logs, Prometheus, Grafana, Alertmanager and PagerDuty).
- English.
A TECHNICAL TEST FOR THE POSITION IS REQUIRED BEFORE THE INTERVIEW
What we offer
At AUBAY we are hiring an AWS Cloud Engineer with English in Barcelona.
We offer the chance to be part of a company in continuous growth, taking part in innovative projects that will let you round out your training and strengthen your skills. We value commitment and dedication to the work you do.
Aubay is a multinational digital services company (DSC) founded in 1998 and currently growing strongly. We operate in high-value-added markets, both in France and elsewhere in Europe. Aubay currently has 5,000 people working with us.
From advisory work to technology projects of every kind, we support the transformation and modernization of information systems across all sectors, including industry, R&D, telecommunications and infrastructure, and especially the major banks and insurance companies, which account for more than 80% of our French revenue and 65% of our European revenue.
Join us, we are waiting for you!
#LI-LR1