Data Engineer with Databricks
15 Nov. EY
EY · Málaga, ES
Python Agile TSQL Azure Scrum Docker Cloud Computing Kubernetes AWS DevOps Kanban Big Data Office
Let us introduce you to a job offer from EY GDS Spain, a member of EY's global integrated service delivery center network.
The opportunity
For our office in Málaga, we are looking for top-notch, technology-savvy specialists willing to move our projects onto a new track! You will use the most advanced technology stack and have the opportunity to implement new solutions while working with top leaders in their industries. As part of our global team, you will participate in international projects, mostly based on and implemented using the three major cloud providers (Azure, GCP, AWS).
Your key responsibilities
As a Senior Data Engineer based in Málaga, you will be responsible for building data platforms that enable our clients and business partners to make efficient decisions with ease. You will design robust solutions on state-of-the-art data engines such as Databricks.
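As a rough illustration of the kind of pipeline work described above, here is a minimal PySpark sketch of a Databricks-style cleaning step that writes a Delta table; the paths, column names, and table names are hypothetical and not part of the posting.

```python
from pyspark.sql import SparkSession, functions as F

# On Databricks a SparkSession already exists; getOrCreate() reuses it
spark = SparkSession.builder.getOrCreate()

# Hypothetical raw landing zone
raw = spark.read.json("/mnt/raw/events/")

clean = (
    raw
    .withColumn("event_date", F.to_date("event_ts"))   # derive a partition column
    .dropDuplicates(["event_id"])                       # basic de-duplication
)

# Write a governed Delta table (Unity Catalog style: catalog.schema.table)
clean.write.format("delta").mode("overwrite").partitionBy("event_date").saveAsTable("analytics.silver.events_clean")
```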
Skills and attributes for success
Solving complex business problems on a large scale. Participating in cross-functional initiatives and collaborating across various domains. Interacting with engineers, product/project managers, and partners from all over the world.
To qualify for the role, you must have
- 3-5 years of experience in the Data Engineering area, including:
  - 3 years of experience with Databricks, including services like data pipelines and Unity Catalog
  - 3 years of experience with Big Data
- Proficiency in SQL, Python and pySpark
- Solid background in data warehousing, ETL, distributed data processing, software engineering components and data modeling concepts
- Analytical problem-solving skills, particularly those that apply to a big data environment
- Experience in working with structured, semi-structured and unstructured data
- Experience in at least one public cloud (Azure, AWS or GCP)
- Strong experience in design techniques for relational databases and non-relational storage
- Solid experience with concepts such as Data Marts, Data Warehouses, Data Lakes, Data Mesh
- Excellent English communication skills (verbal and written); Spanish and/or other languages are a plus
- Experience in using Agile methodologies (Scrum, Kanban, etc.), DevOps and CI/CD principles
Ideally, you'll also have
- Knowledge of data formats: Parquet, Iceberg, Avro
- Containerization: Docker, Kubernetes
What we look for
You have a "Hold my beer - I got this" attitude with a reputation of a "go-to" person with all data-related topics.
What we offer
At EY GDS Spain, we're committed to fostering a vibrant environment where every team member can thrive. We provide a space for continuous learning and the flexibility to define your own success, empowering you to make a meaningful impact in your own way. Our diverse and inclusive culture values who you are and encourages you to help others find their voice.
Additionally, here's what makes us stand out:
- Empowering Career Development: Unlock your potential with tailored training and development programs designed to elevate your skills and propel your career forward. We invest in your growth because your success is our success.
- Flexible Work-Life Integration: Enjoy the freedom of our hybrid work model, allowing you to blend professional responsibilities with personal passions. We understand that life is more than just work, and we support you in achieving that balance.
- Comprehensive Well-Being Programs: Prioritize your health with our extensive wellness initiatives, including psychological support sessions and health resources. At EY GDS Spain, your well-being is at the heart of what we do.
- Meaningful Volunteering Opportunities: Make a difference in your community through our engaging volunteering programs. Join us in giving back and creating a positive impact while building connections with like-minded colleagues.
- Recognized Performance and Rewards: Celebrate your achievements with our recognition programs that honor both individual and team successes. We believe in acknowledging hard work and dedication, ensuring you feel valued every step of the way.
Join us at EY GDS Spain, where your journey is supported, your contributions are celebrated, and your future is bright.
Data Engineer (f/m/d)
14 Nov. Axpo Group
Axpo Group · Madrid, ES
Remote work Python Agile TSQL Azure Cloud Computing Power BI
Workload: 100%
Join Axpo's Group IT to help shape the leading IT organization in the European energy sector. You'll design and scale the data backbone behind our BI solutions, collaborating, innovating, and delivering real impact.
What you will do:
- Design, build, and maintain robust data pipelines with Databricks and Azure Data Factory across on-prem and cloud sources
- Lead and optimize ETL for our data lakehouse to ensure consistency, accuracy, and performance
- Partner with BI engineers to model data for reporting and enable seamless delivery into Power BI (see the sketch after this list)
- Improve scalability, performance, and reliability of large-scale data workflows
- Troubleshoot complex pipeline and platform issues with a hands-on, proactive approach
- Champion best practices for data quality, security, and governance with the central data platform team
- Mentor junior engineers and drive knowledge-sharing within the team
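As a rough sketch of the reporting-model work referenced in the list above, the snippet below aggregates a cleaned Delta table into a small "gold" table that Power BI could read; the table and column names are hypothetical, not taken from the posting.

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()

# Hypothetical cleaned ("silver") trades table in the lakehouse
trades = spark.table("lakehouse.silver.trades")

# Daily aggregate that a Power BI dataset could import or query directly
daily = (
    trades
    .groupBy("trade_date", "commodity")
    .agg(
        F.sum("volume_mwh").alias("total_volume_mwh"),
        F.avg("price_eur").alias("avg_price_eur"),
    )
)

daily.write.format("delta").mode("overwrite").saveAsTable("lakehouse.gold.daily_trading_summary")
```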
What you bring & who you are:
- Solid experience in data engineering with Databricks (Python, PySpark, SQL) and Azure Data Factory
- Proven track record designing and tuning pipelines for high-performance, large-scale environments
- Strong grasp of cloud platforms, data lake and data warehouse concepts
- Experience with GitHub or similar version control and CI/CD tooling
- Comfortable supporting BI topics; experience with Power BI is a plus
- Advanced English skills; German and/or Spanish would be an advantage
- Analytical mindset with a background in engineering, mathematics, business, or a similar field
- Ideally: experience in software engineering and working in agile teams
- Even if you don't meet every requirement, we encourage you to apply; your potential matters
About the team: You'll join a friendly, collaborative Business Intelligence team that values openness, learning, and impact. We work cross-functionally, share knowledge, and support diverse perspectives to achieve great results together.
At Axpo Group, we are dedicated to fostering a culture of non-discrimination, tolerance, and inclusion. As an equal opportunity employer, we welcome applications regardless of race and ethnicity, gender identity and expression, sexual orientation, age, disability, as well as socioeconomic, cultural, and religious background. We are committed to ensuring a respectful and inclusive recruiting process and workplace for everyone.
Benefits:
At our company, we strive to create a culture of continuous learning, personal growth, and international community involvement. We're passionate about providing our employees with the tools and resources they need to succeed, and we're confident that you'll love being part of our team!
- Working Hours
We offer flexible working hours to accommodate your schedule: 60% remote and 40% at our offices in Madrid (Torre Europa).
- Meal allowances
You can enjoy delicious meals on us, no matter if you are working remotely or on-site.
*Option to use it for public transportation or childcare instead.
- Internet Compensation
We cover the cost of your home internet connection, as we understand how essential connectivity is in the modern workplace.
- Microsoft ESI Certifications
Access to the ESI (Enterprise Skills Initiative) certification program, which provides hands-on training for learning and enhancing technical skills and knowledge of Microsoft and Azure technologies.
- Training courses
Our company is committed to helping our employees grow and develop their skills, which is why we offer a variety of industry-specific training courses and a learning channel.
- Gym Coverage
Stay active and healthy with our 90% coverage benefit, which provides access to the nearby gym (Forus Selection) to keep you energized throughout the day.
- Health Insurance
We take the health and well-being of our employees seriously, which is why we offer a comprehensive health insurance plan and the option to extend it to your spouse and children.
Department: IT / Technology · Role: Permanent position · Location: Madrid · Remote status: Hybrid
Junior Data Engineer
13 Nov. COLIBRIX ONE
COLIBRIX ONE · Barcelona, ES
API Python TSQL Docker Cloud Computing REST AWS PostgreSQL Fintech
Join COLIBRIX ONE - Innovating the Future of Payments
At COLIBRIX ONE, we're building advanced, AI-powered payment technologies that support Payment Service Providers (PSPs), Electronic Money Institutions (EMIs), and neobanks across the EU and the UK. As a fully licensed Electronic Money Institution (FCA Reference No. 927920) and holder of a Financial Institution Licence issued by the MFSA, as well as a principal member of both VISA and Mastercard, we provide comprehensive, real-world financial solutions that include:
- Global card processing
- Digital wallet infrastructure
- Cross-border merchant accounts
- Alternative payment methods (APMs)
- Corporate accounts for legal entities
At COLIBRIX ONE, your work directly powers the digital economy. If you're eager to solve meaningful challenges and build with purpose, we'd love to hear from you.
Role Overview
We are looking for a Junior Data Engineer to join our growing Data & Analytics team. This role is ideal for someone passionate about data engineering, automation, and cloud technologies. You'll play a key role in supporting the design, automation, and maintenance of ETL / ELT pipelines and ensuring the reliability and scalability of our data infrastructure on AWS.
Role Objective
Assist in the development, automation, and maintenance of ETL processes and data pipelines on AWS. Contribute to building and optimizing the data infrastructure, ensuring reliable data ingestion, transformation, and storage from various sources.
Key Responsibilities
- Design, automate, and optimize ETL / ELT pipelines using Python and Airflow (see the sketch after this list).
- Work with AWS services (S3, RDS, ECS, Lambda, CloudWatch, Athena, MWAA) for data storage, processing, and monitoring.
- Collaborate with data analysts and cross-functional teams to understand data needs and deliver reliable datasets.
- Integrate and process data from multiple sources, including internal systems, APIs, and external files.
- Maintain documentation and contribute to the continuous improvement of data architecture and workflows.
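To illustrate the Airflow orchestration referenced in the list above, here is a minimal DAG sketch; the pipeline and task names are hypothetical, and it assumes Airflow 2.x as used by MWAA.

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract():
    """Pull raw records from a source system (placeholder)."""


def transform():
    """Clean and reshape the extracted data (placeholder)."""


def load():
    """Write the curated data to its destination (placeholder)."""


with DAG(
    dag_id="payments_etl",              # hypothetical pipeline name
    start_date=datetime(2024, 1, 1),
    schedule="@daily",                  # Airflow 2.4+; use schedule_interval on older versions
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    transform_task = PythonOperator(task_id="transform", python_callable=transform)
    load_task = PythonOperator(task_id="load", python_callable=load)

    extract_task >> transform_task >> load_task
```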
Required:
- Degree in IT, Computer Science, or a related technical field.
- Hands-on experience with cloud platforms (preferably AWS) and understanding of cloud computing principles.
- Proficiency in Python (ETL scripting, libraries such as pandas, requests, Pydantic, boto3/AWS SDK).
- Basic understanding of data modelling and SQL (PostgreSQL preferred).
- Familiarity with ETL / ELT principles, Airflow orchestration, and Docker containerization.
- Experience with GitLab / GitHub for version control.
- Good command of English and Russian (written and spoken).
- Previous experience working with fintech or payments data.
- Hands-on use of AWS services (Athena, ECS, Lambda, RDS, CloudWatch).
- Experience with API integrations (REST / JSON).
What we offer:
- Opportunity to work in a modern data stack environment with AWS and Airflow.
- Supportive team culture focused on learning and professional growth.
- Competitive compensation package.
- Dynamic and innovative company within the fintech / payments industry.
- Employment will be offered through one of the group's legal entities - Mellifera Kartiera Ltd, Colibrix Ltd, or Mellifera Operations Ltd - depending on the role, location, and applicable legal framework.
This position is offered within the COLIBRIX ONE group. Employment will be under the appropriate legal entity based on the role and location.
Machine Learning Engineer
13 Nov. EPAM
EPAM · Madrid, ES
Python Azure Docker Cloud Computing Kubernetes DevOps Machine Learning
We are looking for a Machine Learning Engineer to join our team and drive the development of a scalable machine learning framework and tooling.
You will play a key role in enabling efficient collaboration between data scientists, data engineers and cloud architects. You'll also help build GenAI-centric tools that improve the ML lifecycle through automation, optimization and observability.
RESPONSIBILITIES
- Design, build and maintain a robust framework to support machine learning projects at scale
- Act as a technical bridge between data science, engineering and cloud infrastructure teams
- Collaborate on the development and deployment of GenAI applications and agents such as LLM pipelines and image generation models
- Deploy models using containerized and serverless infrastructure such as Docker, Kubernetes and Azure Functions
REQUIREMENTS
- Proven experience in MLOps and DevOps practices across the ML lifecycle
- Hands-on experience with cloud platforms, especially Azure: Azure ML, Functions, Storage
- Familiarity with orchestration of ML pipelines and experiments using MLOps tooling such as MLflow, Vertex AI, Azure Machine Learning, Databricks Workflows and SageMaker (see the MLflow sketch after this list)
- Solid understanding of model deployment using Docker, Kubernetes and serverless technologies
- Strong software engineering background: Python, CI/CD, testing frameworks
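As a small, hedged illustration of the MLOps tooling referenced in the list above, here is a minimal MLflow tracking sketch; the experiment name, model, and metric are placeholders and stand in for whatever the actual projects use.

```python
import mlflow
import mlflow.sklearn
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

# Hypothetical experiment; on Azure ML or Databricks the tracking URI would point at the workspace
mlflow.set_experiment("demo-churn")

X, y = make_classification(n_samples=500, random_state=0)

with mlflow.start_run():
    model = LogisticRegression(max_iter=200).fit(X, y)
    mlflow.log_param("max_iter", 200)
    mlflow.log_metric("train_accuracy", model.score(X, y))
    # Logs the model as a run artifact; with a registry configured,
    # adding registered_model_name="..." would also register it for deployment
    mlflow.sklearn.log_model(model, artifact_path="model")
```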
NICE TO HAVE
- Experience with GenAI technologies such as agentic workflows: LangChain, OpenAI tools, custom agents
- Working knowledge of the MCP server or similar scalable serving architectures
- Exposure to retrieval-augmented generation (RAG) or vector database integrations
- Experience working with infrastructure-as-code tools for deploying ML systems on the cloud
WE OFFER
- Private health insurance
- EPAM Employees Stock Purchase Plan
- 100% paid sick leave
- Referral Program
- Professional certification
- Language courses
Data Engineer WebFOCUS
13 Nov. CMV Consultores
CMV Consultores · Madrid, ES
Remote work TSQL Oracle SQL Server
At CMV Consultores we offer you the best opportunities with leading clients.
We are looking for a SENIOR DATA engineer with solid experience in WebFOCUS development and administration, including report and dashboard design and maintenance of production environments. In-depth knowledge of versions 8 and 9, with specific experience in migration processes, compatibility issues and architecture changes. Knowledge of iWay. Command of WebFOCUS DataMigrator / ETL, ReportCaster and Security Center. Experience integrating WebFOCUS with relational databases (Oracle, SQL Server, etc.) and in query/report optimization. Knowledge of Qlik would also be ideal.
What do we offer?
Permanent contract and competitive salary according to experience.
Long-term project
Systems Test Engineer
GMV
GMV · Madrid, ES
Git QA
Are you looking for an innovative, well-established place to develop professionally? At GMV you have the perfect opportunity! We are expanding our teams in the Defence sector to take part in the development of maximum-security products applied to Cross Domain, an area where we are a reference both nationally and internationally. We like to get straight to the point, so we will tell you what is not on the web. If you want to know more about us, visit the GMV website.
WHAT CHALLENGE WILL YOU FACE?
You will join the testing team and take part in software- and system-level testing activities for network security products. You will participate in the execution of both manual and automated tests, in the development/extension and maintenance of test plans, and in the analysis and documentation of results. You will also take part in the planning and documentation of the (real and virtual) test execution environments.
WHAT DO WE NEED IN OUR TEAM?
For this position we are looking for engineers with knowledge of testing techniques and systems and software engineering, as well as experience in test design and automation. Command of scripting languages and version control tools (Subversion, Git) will be relevant. Experience and interest in information security, configuration management fundamentals, QA certifications, and deployment of network platforms (physical and virtual) will be valued.
WHAT DO WE OFFER YOU?
🕑 Intensive working hours three days a week (08:00-15:00), one of them always being Friday, and every day during the summer (July and August).
🚀 Personalised career development plan and training.
🌍 National and international mobility. Coming from another country? We offer a relocation package.
💰 Competitive remuneration with ongoing reviews.
💪 Wellbeing programme: medical, dental and accident insurance; free fruit and coffee; training in physical, mental and financial health; and much more!
⚠️ In our selection processes you will always have telephone and personal contact, in person or online, with our talent acquisition team. Bank transfers or bank card details will never be requested. If you are contacted through any other process, write to our team at [email protected].
❤️ We promote equal opportunities in hiring and are committed to inclusion and diversity.
WHAT ARE YOU WAITING FOR? JOIN US
DevOps Engineer
11 Nov. Bitpanda
Bitpanda · Barcelona, ES
API Docker Cloud Computing Kubernetes AWS DevOps
Who we are
We simplify wealth creation. Founded in 2014 in Vienna, Austria by Eric Demuth, Paul Klanschek and Christian Trummer, we're here to help people trust themselves enough to build their financial freedom - for now and the future. Our user-friendly, trade-everything platform empowers both first-time investors and seasoned experts to invest in the cryptocurrencies, crypto indices, stocks*, precious metals and commodities* they want - with any sized budget, 24/7. Our global team works across different cultures and time zones, bringing our products to more than 6 million customers, making us one of Europe's safest and most secure platforms that powers modern investing.
Headquartered in Austria but operating across Europe, our products are built by fast-moving, talented, "roll-up-your-sleeves-and-make-it-happen" kind of people. It's these diverse perspectives and innovative minds operating as ONE TEAM that keep Bitpanda at the cutting edge of our industry. So if you're someone who thinks big, moves fast and wants to make an impact right from day one, then get ready to join our industry-changing team. Let's go!
Your Mission
Join Bitpanda's dynamic team to drive the evolution of our cutting-edge infrastructure that shapes the future of finance with blockchain technology, ensuring seamless scalability and high performance. Empower millions of users by leveraging the latest technologies and your DevOps expertise to create innovative solutions. We have a motivated team of highly skilled, experienced engineers who can support and mentor you on your learning journey.
What You'll Do
- Develop and enhance our infrastructure using AWS, Infrastructure as Code (Terraform/CloudFormation), and ArgoCD.
- Manage container orchestration platforms, such as AWS EKS and AWS ECS, and operate a cross-cluster Istio service mesh and API Gateway
- Monitor, upgrade, and automate systems to guarantee stability, security, and high performance using tools like Datadog, Opsgenie, and AI (see the sketch after this list).
- Manage and deploy critical infrastructure components, including hybrid cloud (AWS Site-to-Site VPN, DirectConnect, Transit Gateway) and cloud-to-cloud (AWS PrivateLink) connections, ensuring seamless integration with cloud-based systems.
- Implement scalable solutions for complex technical challenges.
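As a hedged illustration of the monitoring and upgrade automation referenced in the list above, the snippet below lists EKS clusters and their Kubernetes versions with boto3; the region and target version are assumptions for the sketch, not details from the posting.

```python
import boto3

# Hypothetical region; in practice this would come from configuration
eks = boto3.client("eks", region_name="eu-west-1")

TARGET_VERSION = "1.29"  # assumed upgrade target for the sketch

for name in eks.list_clusters()["clusters"]:
    version = eks.describe_cluster(name=name)["cluster"]["version"]
    flag = "" if version == TARGET_VERSION else "  <- differs from target"
    print(f"{name}: {version}{flag}")
```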
Who You Are
- Solid experience in AWS-based infrastructure (EC2, VPC, IAM, RDS, KMS, etc.).
- Minimum of 3 years of experience with Docker, Kubernetes, and Infrastructure as Code (Terraform/CloudFormation).
- Proficiency in CI/CD tools such as GitLab CI/CD, with some experience in ArgoCD.
- Strong experience in deploying and managing critical cloud infrastructure components.
- (Nice to Have) Experience with multi-cluster service mesh (Istio) and related capabilities, including Authorization Policies, Envoy Filters and rate-limit.
What´s in it for you
- Hybrid working model with 25 Work From Anywhere days*
- Competitive total compensation package including participation in our stock option plan
- Market-leading benefits programs shaped by our Time & Flexibility policies*
- Company-wide and team events - both in-person and virtually!
- Bitpanda swag to keep you living the brand.
- Comprehensive onsite onboarding program at our Vienna HQ.
And, above all, the opportunity to learn and grow as part of Bitpanda's incredible journey towards being Europe's future #1 investment platform.
Bitpanda is committed to fostering a fair and equal environment based on trust and mutual respect. We believe that a diverse and inclusive workplace is paramount to our success and we are committed to building a team that represents a wide variety of backgrounds, perspectives, and skills.
* These benefits may be adjusted at Bitpanda's discretion and do not apply to our internships; exceptions to our Hybrid Working policy apply to teams with shift schedules or to folks whose roles require them to be in-office (think: Workplaces team or IT).
BECA AHE SW Test Engineer
10 Nov. Airbus
Airbus · Madrid, ES
Software Integration and Verification & Validation: the ability to verify that the system's requirements are correctly and completely implemented. The result of this activity may be required for qualification of the system in the frame of customer acceptance or certification.
- Define interfaces between functions, or between systems, and manage their consistent implementation into the system(s)
- Test preparation, execution, and analysis for the functions of a System / Sub-system / Equipment / Component / Module.
The jobholder shall take on the following main tasks (under direct coordination with ETZWM):
- Analysis of Problem Reports (PRs) and Engineering Change Requests (ECRs)
- Performance of integration tests on the STB to verify the correct implementation of PR solutions and ECRs
- Changes / Adaptations of the Software Test Descriptions and Procedures
- Verification (i.e. inspections/walkthrough) of Test Descriptions and Procedures
- Performance of Formal Qualification Tests of the STB (for either Flight Clearance or Qualification purposes), including collection and analysis of Results.
- Changes / Adaptation of the Data Models in DUET and ODIN
- Generation of Test Documentation (STD, STR)
This job requires an awareness of any potential compliance risks and a commitment to act with integrity, as the foundation for the Company's success, reputation and sustainable growth.
Company: Airbus Helicopters España, SA
Employment Type: Internship
Experience Level: Student
Job Family: Software Engineering
Senior DevOps Engineer
10 Nov. EY
EY · Madrid, ES
API Python Azure Linux Cloud Computing Kubernetes AWS DevOps Terraform Docker Kafka
About Us
At EY wavespace Madrid - AI & Data Hub, we are a diverse, multicultural team at the forefront of technological innovation, working with cutting-edge technologies like Gen AI, data analytics, robotics, etc. Our center is dedicated to exploring the future of AI and Data.
Overview:
We're looking for a Senior DevOps Engineer to build and run cloud and AI infrastructure at scale. You'll own IaC with Terraform, CI/CD, Kubernetes, and Linux. You'll also help run LLM workloads both in Azure and locally (Ollama/vLLM/llama.cpp). Your work will enable fast, secure, repeatable delivery.
Key responsibilities
- Build and maintain Azure infrastructure with Terraform (modules, workspaces, pipelines, policies).
- Design and operate CI/CD with GitHub Actions and/or Azure DevOps (multi-stage, approvals, environments).
- Run containers and Kubernetes/AKS (Helm, ingress, autoscaling, node pools, storage).
- Manage AI/LLM runtime: local model runners (Ollama, vLLM, llama.cpp), GPU/CPU configs.
- Support RAG: embeddings pipelines, vector DBs (Azure AI Search/Cognitive Search, pgvector, Milvus), data sync, retention.
- Automate platform tasks with Python (tooling, CLI utilities, API glue, ops scripts).
- Implement observability (Azure Monitor, Prometheus/Grafana, logs/traces/metrics, alerts, runbooks, SLOs).
- Apply Zero Trust security: enforce least privilege, role-based access control (RBAC), and identity-based segmentation (Azure AD, Conditional Access, MFA).
- Implement policy-as-code (OPA, Azure Policy) for compliance.
- Rotate secrets and certificates via Key Vault; integrate with pipelines (see the sketch after this list).
- Add continuous security scanning (SAST/DAST, container image scanning).
- Handle reliability: rollout strategies, health probes, incident response, postmortems.
- Optimize costs: right-sizing, autoscaling, budgets, tags, reporting.
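As a hedged sketch of the Python-based platform automation and Key Vault rotation referenced in the list above, here is a minimal example using the Azure SDK; the vault URL and secret name are hypothetical.

```python
from azure.identity import DefaultAzureCredential
from azure.keyvault.secrets import SecretClient

# DefaultAzureCredential works locally (az login) and in pipelines (managed identity / service principal)
credential = DefaultAzureCredential()
client = SecretClient(vault_url="https://example-vault.vault.azure.net", credential=credential)

# Read the current secret (hypothetical name)
secret = client.get_secret("db-connection-string")
print(secret.name, secret.properties.updated_on)

# "Rotation" writes a new version; consumers that always read the latest version pick it up automatically
client.set_secret("db-connection-string", "new-value-produced-by-the-rotation-job")
```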
Key requirements:
- 4+ years in DevOps/SRE/Platform Engineering.
- Strong Linux (shell, systemd, networking, performance troubleshooting).
- Terraform at scale (modules, state backends, CI/CD integration).
- Deep Azure experience (AKS, VNets, Key Vault, Storage, Monitor, Identity, Networking).
- CI/CD expertise (GitHub Actions and/or Azure DevOps).
- Containers and Kubernetes in production.
- Python or scripting for automation (solid scripting and tooling; not full-time app dev).
- Hands-on with LLM setups (local runners or Azure OpenAI), embeddings, vector indexes, and RAG basics.
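To make the embeddings and RAG basics in the last requirement concrete, here is a minimal local retrieval sketch using sentence-transformers; the model name and documents are placeholders, and a production setup would use a vector index (Azure AI Search, pgvector, Milvus) instead of an in-memory matrix.

```python
import numpy as np
from sentence_transformers import SentenceTransformer

# Any local embedding model works; this one is small and commonly available
model = SentenceTransformer("all-MiniLM-L6-v2")

docs = [
    "Terraform state is stored in a remote backend per environment.",
    "AKS node pools autoscale based on pending pods.",
    "Key Vault secrets are rotated by a scheduled pipeline.",
]

# Normalized embeddings, so a dot product equals cosine similarity
doc_vecs = model.encode(docs, normalize_embeddings=True)

query_vec = model.encode(["How does cluster autoscaling work?"], normalize_embeddings=True)[0]
scores = doc_vecs @ query_vec

# The best-matching chunk would be passed to the LLM as retrieval context
print(docs[int(np.argmax(scores))])
```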
Nice to have
- Multi-cloud exposure (AWS / GCP).
- Azure AI services (Azure OpenAI, Cognitive Search).
- GitOps (Argo CD/Flux), Helm packaging, OCI registries.
- Eventing/queues (Event Grid, Service Bus, Kafka).
- Security/compliance in cloud (CIS, NIST, Microsoft CAF).
- Certifications: AZ-104, AZ-204, AZ-400, AI-900, HashiCorp Terraform Associate, CKA/CKAD.
- Experience with GPU nodes, drivers, CUDA/ROCm, or CPU-only optimizations for LLMs.
How we work
- Everything as code. PRs, reviews, and tests.
- Small batches. Trunk-based or short-lived branches.
- Clear runbooks and on-call rotation where needed.
- Measure, alert, fix, and improve.
Our commitment to diversity & inclusion
We are genuinely passionate about inclusion and we support individuals of all groups; we do not discriminate on the basis of race, religion, gender, sexual orientation, or disability status.