papernest
Junior Data Engineer: Cloud & DevOps - Barcelona
papernest · Barcelona, ES
Remote work · Python TSQL OOP Cloud Computing AWS DevOps Terraform
This year marks 10 years since we launched the idea that simplifying our customers' lives is possible by offering an innovative solution that allows them to easily subscribe to, manage, and switch all types of contracts through a unique and intuitive platform.
In that time, we have supported more than 2 million customers in France, Spain, and Italy, while investing in new verticals and positioning ourselves as a highly efficient, innovative, and competitive scale-up in a rapidly growing market.
With over 900 employees across 3 locations, we are solidifying our position as a market leader in Europe. We are always on the lookout for talent ready to join a dedicated and motivated team driven by a meaningful project. Working with us means embracing a culture of excellence, innovation, and real impact.
We are looking for a Junior Data Engineer, with a Cloud & DevOps orientation. This role is for the engineer who loves the "Engine" part of Data Engineering. You will build the technical foundation that allows our data to flow. You will focus on the "how"—ensuring our infrastructure is automated, our CI/CD is fast, and our data platform is ready for the next generation of AI-driven automation.
Infrastructure as Code: Assist in evolving our stack (Python/Airflow/Docker) hosted on AWS.
DevOps for Data: Maintain and improve our CI/CD pipelines to ensure data deployments are seamless.
OOP Excellence: Build reusable Python modules that standardise how we handle data across the organization.
AI Enablement: Partner with the team to provide the infrastructure needed for AI/ML experimentation.
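The "OOP Excellence" point above (reusable Python modules that standardise how data is handled) can be sketched with a minimal, hypothetical pattern; class and field names are illustrative, not papernest's actual code:

```python
from abc import ABC, abstractmethod

class Transform(ABC):
    """Base class standardising how one batch of records is processed."""
    @abstractmethod
    def apply(self, records: list[dict]) -> list[dict]: ...

class DropNulls(Transform):
    """Remove records missing a required field."""
    def __init__(self, field: str):
        self.field = field
    def apply(self, records):
        return [r for r in records if r.get(self.field) is not None]

class Rename(Transform):
    """Rename a field to the organisation-wide standard name."""
    def __init__(self, old: str, new: str):
        self.old, self.new = old, new
    def apply(self, records):
        return [{**{k: v for k, v in r.items() if k != self.old},
                 self.new: r[self.old]} for r in records]

class Pipeline:
    """Composes transforms so every flow is built the same way."""
    def __init__(self, steps: list[Transform]):
        self.steps = steps
    def run(self, records: list[dict]) -> list[dict]:
        for step in self.steps:
            records = step.apply(records)
        return records
```

Because every flow composes the same `Transform` interface, new steps can be added without reinventing batch handling in each squad.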
Ideally, a strong academic background in Software Development.
Tech: EXCELLENT Python (OOP) and SQL.
The Edge: A "Big Plus" for initial experience with AWS or Terraform.
Thrive in an international and inclusive environment: everyone has a place at papernest. With over 46 different nationalities, it’s not uncommon here to start a sentence in English and finish it en français or en español!
💸 Compensation: a plan for Subscription Warrants for Company Creators (BSPCE) in accordance with company regulations, as well as a Pluxee card to manage your tax level through a voluntary compensation system across different services (transportation, dining, and childcare).
🏆 Benefits: as a home insurance provider and a supplier of green electricity and gas, we offer attractive deals to our employees. After all, there’s no reason why things should only be simpler for our customers!
🩺 Health: medical insurance through Alan or Sanitas to manage your healthcare expenses in an ultra-simple, paperless way, with up to 50% coverage by papernest (after 6 months in the company).
🍽️ Meals & partnerships: a healthy breakfast offered every Tuesday, as well as partnerships with various services in Barcelona (restaurants, sports, leisure, and care centers).
📚 Training: the development of our employees is essential. You will have access to ongoing training tailored to your goals, whether it involves technical, language, or managerial skills.
📈 Career Development: numerous opportunities are available for you to grow, whether by deepening your expertise or exploring new paths. We support you in your professional ambitions.
✨ Remote Work: enjoy 2 days of remote work per week to optimize your focus and efficiency.
Hiring process: 1st call with Talent Acquisition
Interview with a team member
Technical Case
Interview with Alex - Head of Data Engineering
Interested in this challenge? 🙂
Don’t hesitate any longer—we look forward to meeting you! Regardless of your age, gender, background, religion, sexual orientation, or disability, there’s a place for you with us. Our selection processes are designed to be inclusive, and our work environment is adapted to everyone’s needs.
We particularly encourage applications from women. Even if you feel that you don’t meet all the criteria outlined in this job posting, please know that every application is valuable. We strongly believe that diverse and varied backgrounds enrich our team, and we will carefully consider your application. Parity and diversity are essential assets to our success.
Principal Cloud Engineering Architect
Jan 7 · AstraZeneca
Barcelona, ES
Principal Cloud Engineering Architect
AstraZeneca · Barcelona, ES
Cloud Computing Kubernetes AWS DevOps Terraform Docker Office
Job Title: Principal Cloud Engineering Architect, Evinova
Introduction to role:
Are you ready to connect strategy with cloud engineering to accelerate how life-changing medicines reach patients with Evinova? Do you thrive on shaping reusable patterns that scale securely across an enterprise while mentoring engineers to do their best work?
In this principal architect role, you will set the blueprint for how we build and operate multi-tenant platforms on AWS. You will guide developer teams in Barcelona, align DevOps with business goals, and embed security and compliance into everything we deliver. Your impact will be felt in faster, safer product delivery and technology choices that stand up to the demands of a global, highly regulated environment.
You will bring a well-rounded attitude, joining dots between future technology trends, business needs, solution development and ways of working. Working across functions, you will help change how teams collaborate, standardize, and innovate, creating cloud foundations that enable our science and operations to move at pace.
Accountabilities:
Technical Leadership and Mentorship: Provide hands-on guidance to developer teams in Barcelona, ensuring their technology needs are met while adhering to enterprise standards. Foster a culture of learning and collaboration and lead the adoption of AWS CDK and cloud automation standard methodologies.
Scalable Multi-tenant Architecture: Architect and manage AWS infrastructure using AWS CDK, designing modular, maintainable codebases and building multi-tenant platforms that scale reliably across products and regions.
Reusable Patterns and Standards: Develop and promote reusable cloud architecture patterns to accelerate best-practice adoption across the enterprise, reducing duplication and improving consistency and speed.
Multi-functional Alignment: Partner with product management, security and other partners to align DevOps strategies with business outcomes, ensuring cohesive development and operational workflows.
Embedded Security and Compliance: Implement DevSecOps best practices, including IAM security, encryption standards, and compliance with GXP, GDPR, HIPAA and NIST, so platforms are secure by design and audit-ready.
Innovation and Continuous Improvement: Continuously evaluate emerging technologies and methodologies, recommending pragmatic improvements that keep our cloud architecture innovative, efficient and fit for purpose.
Risk Management: Identify risks associated with technology decisions and design mitigation strategies that balance speed, safety and business priorities.
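As a rough illustration of the "reusable patterns" and multi-tenant accountabilities above, the following stdlib Python sketch centralises naming and compliance tagging the way a shared CDK construct would; all names and tag values are hypothetical, not AstraZeneca's actual standards:

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)
class TenantPlatform:
    """Hypothetical reusable 'construct': define the standard once,
    instantiate it per tenant and region."""
    tenant: str
    region: str
    env: str = "prod"
    tags: dict = field(default_factory=dict)

    def resource_name(self, resource: str) -> str:
        # Enterprise naming convention enforced in one place.
        return f"{self.env}-{self.tenant}-{self.region}-{resource}"

    def required_tags(self) -> dict:
        # Compliance tags applied by default so platforms are
        # audit-ready by construction.
        return {"tenant": self.tenant, "env": self.env,
                "compliance": "gxp", **self.tags}
```

In a real CDK codebase the same idea appears as a custom construct: every team that instantiates it inherits the naming, tagging, and security posture for free.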
Essential Skills/Experience:
• Proven experience providing technical leadership and guidance to developer teams, including distributed teams in Barcelona.
• Deep expertise in AWS Cloud Development Kit (CDK) and standard methodologies for cloud automation.
• Track record architecting and managing scalable, multi-tenant AWS infrastructure using AWS CDK, with modular and maintainable codebases.
• Ability to develop and promote reusable cloud architecture patterns to accelerate best-practice adoption across an enterprise.
• Experience partnering with product management and security to align DevOps strategies with business goals and ensure cohesive workflows.
• Hands-on implementation of DevSecOps guidelines including IAM security, encryption standards, and compliance with GXP, GDPR, HIPAA and NIST.
• Experience evaluating emerging technologies and methodologies to recommend improvements to the technology stack.
• Strength in identifying technology risks and developing mitigation strategies aligned with business goals.
Desirable Skills/Experience:
• Experience with containers, Kubernetes and serverless architectures on AWS.
• Expertise in multi-account governance, landing zones, and infrastructure-as-code patterns beyond CDK (e.g., Terraform).
• Background working in global, highly regulated environments with audit readiness.
• Ability to drive cost optimization, reliability engineering and observability practices at scale.
• Influence and leadership in communities of practice to set and evolve enterprise standards.
• Fluency in English; Spanish language skills to support collaboration with Barcelona teams.
• Prior experience mentoring architects and shaping cloud architecture roadmaps.
When we put unexpected teams in the same room, we unleash bold thinking with the power to inspire life-changing medicines. In-person working gives us the platform we need to connect, work at pace and challenge perceptions. That's why we work, on average, a minimum of three days per week from the office. But that doesn't mean we're not flexible. We balance the expectation of being in the office while respecting individual flexibility.
Join us in our unique and ambitious world.
Senior Data Engineer
Jan 7 · Krell Consulting & Training
Barcelona, ES
Senior Data Engineer
Krell Consulting & Training · Barcelona, ES
Description
🧠 Senior Data Engineer
📍 Location: Barcelona – Viladecans (metropolitan area)
🏢 Work model: hybrid (2 days/week at the client's offices)
🏭 Sector: Industry
📝 Role description
We are looking for a Senior Data Scientist specialized in Optimization to join a highly complex industrial-sector project. The selected candidate will take part in the design and implementation of advanced optimization solutions, turning complex mathematical problems into algorithms with direct business impact.
⚙️ Responsibilities
Design and development of optimization models based on business rules and complex constraints.
Application of metaheuristic techniques to solve optimization problems.
Translation of business requirements into mathematical models and efficient algorithms.
Collaboration with technical and business teams in production environments.
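The metaheuristic work described above can be illustrated with a toy hill-climbing search in Python. The objective is made up: the penalty term stands in for a hard business constraint, and the move operator is chosen so every candidate stays feasible:

```python
import random

def local_search(cost, neighbor, x0, iters=1000, seed=42):
    """Minimal hill-climbing metaheuristic: accept a neighbour only
    if it lowers the (constraint-penalised) cost."""
    rng = random.Random(seed)
    best = x0
    best_cost = cost(best)
    for _ in range(iters):
        cand = neighbor(best, rng)
        c = cost(cand)
        if c < best_cost:
            best, best_cost = cand, c
    return best, best_cost

# Toy problem: pick an integer in [0, 100] close to 37, subject to
# the hard constraint "x must be even" (penalised in the cost, and
# preserved by the +/-2 move operator from an even start).
cost = lambda x: abs(x - 37) + (1000 if x % 2 else 0)
neighbor = lambda x, rng: max(0, min(100, x + rng.choice([-2, 2])))
```

Real industrial problems swap the toy objective for business rules and the simple move for richer neighbourhoods (swaps, reinsertions, restarts), but the accept/reject loop is the same skeleton.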
papernest
Junior Data Engineer: Data Flow & Architecture
papernest · Barcelona, ES
Remote work · Python TSQL OOP SaaS
As a Junior Data Engineer you will be the guardian of data quality and lineage. You’ll be assigned to a squad where the complexity of data flows is high. You won't just move data; you will design the logic that ensures our BigQuery data lake remains a "Single Source of Truth."
Advanced ETL/ELT: Design and implement data processing flows using Python and Airflow.
Data Lineage: Help develop tools that track data from source to destination, ensuring transparency for all users.
Reporting & Quality: Perform daily reporting on the health of customer and internal data flows.
Custom Tooling: Build internal Data Engineering tools to replace manual tasks—no SaaS "black boxes" here.
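The data-lineage idea above (tracking data from source to destination) can be sketched in plain Python with a decorator that records an edge each time a flow step runs; the table names and global registry are illustrative, not papernest's actual tooling:

```python
# Global registry of observed source -> destination edges.
LINEAGE: list[tuple[str, str]] = []

def tracks(source: str, destination: str):
    """Record a lineage edge whenever the decorated flow step runs,
    so downstream users can see where a table came from."""
    def wrap(fn):
        def inner(*args, **kwargs):
            LINEAGE.append((source, destination))
            return fn(*args, **kwargs)
        return inner
    return wrap

@tracks("crm.contacts", "lake.raw_contacts")
def ingest(rows):
    # Land raw rows unchanged.
    return list(rows)

@tracks("lake.raw_contacts", "lake.clean_contacts")
def clean(rows):
    # Drop rows that fail a basic integrity rule.
    return [r for r in rows if r.get("id") is not None]
```

After one run of `clean(ingest(...))`, `LINEAGE` holds the full path from source system to clean table, which is exactly the transparency a "Single Source of Truth" needs.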
Engineering school background in Software Development.
Tech: Mastery of Python (OOP) and strong SQL (BigQuery is a plus).
The Edge: You love designing complex systems and have a high attention to detail regarding data integrity.
Senior Data Engineer (Data Operations Team)
Dec 29 · Semrush
Barcelona, ES
Senior Data Engineer (Data Operations Team)
Semrush · Barcelona, ES
Python TSQL Docker Cloud Computing Kubernetes REST SaaS Terraform Office
Hi there!
We are Semrush, a global Tech company developing our own product – a platform for digital marketers.
Are you ready to be a part of it? This is your chance! We’re hiring a Senior Data Engineer (Data Operations Team).
Tasks in the role
General Overview
- Our data ecosystem is built on self-hosted Airflow & dbt Core, along with multiple BigQuery instances.
- The current setup was built several years ago and has become highly customized.
- While this customization supports flexibility, it now limits development speed and reduces analytics efficiency.
- We’re looking for a highly technical expert who can redesign, simplify, and standardize our DWH infrastructure.
- The focus is more on stabilizing and improving the system rather than pure feature development.
- Identify and carefully resolve infrastructure inefficiencies
- Conduct audits of existing infrastructure and propose improvements
- Oversee infrastructure health, performance, and cost efficiency
- Evaluate architecture proposals from peers and provide feedback
- Make key architectural proposals
- Develop and deploy IaC using Terraform
- Create and maintain CI/CD pipelines in GitLab
- Design, build, and optimize data pipelines using BigQuery, Airflow & dbt
- Monitor and troubleshoot cloud infrastructure, pipelines, and workflows
- Support the development and maintenance of ML/AI tools and workflows
- Conduct code reviews for merge requests
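One cheap audit of the kind described above is validating the pipeline dependency graph before a change is deployed. This stdlib sketch (task names are made up, not Semrush's DAGs) orders tasks for execution and fails fast on cycles:

```python
from graphlib import TopologicalSorter, CycleError

def audit_dag(deps: dict[str, set[str]]) -> list[str]:
    """Return the tasks in a valid execution order, or raise
    CycleError if the dependency graph is broken."""
    return list(TopologicalSorter(deps).static_order())

# Each task maps to the set of tasks it depends on.
deps = {
    "load_raw": set(),
    "dbt_staging": {"load_raw"},
    "dbt_marts": {"dbt_staging"},
}
```

Running this in CI (e.g., a GitLab pipeline stage) catches a mis-wired dependency before Airflow ever schedules it.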
Hard Skills
- Proficient in System Design & Architecture
- Strong expertise in Airflow management
- Strong expertise in dbt management
- Proficient in IaC tools (Terraform)
- Proficient in CI/CD tools (GitLab)
- Advanced knowledge of SQL
- Proficient in Python
- Experienced in Monitoring & Alerting (Grafana)
- Experience with Containers (Docker, Kubernetes)
- Strong project management skills across the full delivery lifecycle: from requirement gathering and decomposition to roadmapping, prioritization, execution, and delivery
- Proactive and autonomous, able to make efficient decisions with minimal supervision
- Strategic and structured thinker
- Excellent problem-solving skills and attention to detail
- Strong communication and stakeholder management skills, with ability to build reliable partnerships
- Flexible working hours
- Unlimited PTO
- Flexi Benefit for your hobby
- Employee Support Program
- Loss of family member financial aid
- Employee Resource Groups
- Meals, snacks, and drinks at the office
- Corporate events
- Teambuilding
- Training, courses, conferences
Semrush is a leading online visibility management SaaS platform that enables businesses globally to run search engine optimization, pay-per-click, content, social media and competitive research campaigns and get measurable results from online marketing.
We've been developing our product for 17 years and have been awarded G2's Top 100 Software Products, Global and US Search Awards 2021, Great Place to Work Certification, Deloitte Technology Fast 500 and many more. In March 2021 Semrush went public and started trading on the NYSE with the SEMR ticker.
10,000,000+ users in America, Europe, Asia, and Australia have already tried Semrush, and over 1,700 people around the world are working on its development. The Semrush team is constantly growing.
Our Diversity, Equity, and Inclusion commitments
Semrush is an equal opportunity employer. Building a better future for marketers around the world unites people from all backgrounds. Even if you feel that you don’t 100% match all requirements, don’t be discouraged to apply! We are committed to ensure that everyone feels a sense of belonging in the workplace.
We do not discriminate based upon race, religion, creed, color, national origin, sex, pregnancy, sexual orientation, gender identity, gender expression, age, ancestry, physical or mental disability, or medical condition including medical characteristics, genetic identity, marital status, military service, or any other classification protected by applicable local, state or federal laws.
Our new colleague, we are waiting for you!
Anastasiia Bruk
Talent Acquisition Partner
Canonical
Embedded & Desktop Linux Systems Engineer - Optimisation
Canonical · Barcelona, ES
Remote work · Linux C++ Cloud Computing IoT
Work across the full Linux stack from kernel through GUI to optimise Ubuntu, the world's most widely used Linux desktop and server, for the latest silicon.
The role is a fast-paced, problem-solving role that's challenging yet very exciting. The right candidate must be resourceful, articulate, and able to deliver on a wide variety of solutions across PC and IoT technologies. Our teams partner with specialist engineers from major silicon companies to integrate next-generation features and performance enhancements for upcoming hardware.
Location: This is a globally remote role.
What your day will look like
- Design and implement the best Ubuntu integration for the latest IoT and server-class hardware platforms and software stacks
- Work with partners to deliver a delightful, optimised, first class Ubuntu experience on their platforms
- Take a holistic approach to the Ubuntu experience on partner platforms with inputs on technical plans, testing strategy, quality metrics
- Participate as technical lead on complex customer engagements involving complete system architectures from cloud to edge
- Help our customers integrate their apps, SDKs, build device OS images, optimize applications with Ubuntu Core, Desktop and Server
- Work with the most advanced operating systems and application technologies available in the enterprise world.
What we are looking for in you
- You love technology and working with brilliant people
- You have a Bachelor's degree in Computer Science, STEM or similar
- You have experience with Linux packaging (Debian, RPM, Yocto)
- You have experience working with open source communities and licences
- You have experience working with C, C++
- You can work in a globally distributed team through self-discipline and self-motivation.
- Experience with graphics stacks
- Good understanding of networking - TCP/IP, DHCP, HTTP/REST
- Basic understanding of security best practices in IoT or server environments
- Good communication skills, ideally public speaking experience
- IoT / Embedded experience – from board and SoC, BMCs, bootloaders and firmware to OS, through apps and services
- Some experience with Docker/OCI containers/K8s
Your base pay will depend on various factors including your geographical location, level of experience, knowledge and skills. In addition to the benefits above, certain roles are also eligible for additional benefits and rewards including annual bonuses and sales incentives based on revenue or utilisation. Our compensation philosophy is to ensure equity right across our global workforce.
In addition to a competitive base pay, we provide all team members with additional benefits, which reflect our values and ideals. Please note that additional benefits may apply depending on the work location and, for more information on these, you can ask in the later stages of the recruitment process.
- Fully remote working environment - we've been working remotely since 2004!
- Personal learning and development budget of USD 2,000 per annum
- Annual compensation review
- Recognition rewards
- Annual holiday leave
- Parental Leave
- Employee Assistance Programme
- Opportunity to travel to new locations to meet colleagues at 'sprints'
- Priority Pass for travel and travel upgrades for long haul company events
Canonical is a pioneering tech firm that is at the forefront of the global move to open source. As the company that publishes Ubuntu, one of the most important open source projects and the platform for AI, IoT and the cloud, we are changing the world on a daily basis. We recruit on a global basis and set a very high standard for people joining the company. We expect excellence - in order to succeed, we need to be the best at what we do.
Canonical has been a remote-first company since its inception in 2004. Work at Canonical is a step into the future, and will challenge you to think differently, work smarter, learn new skills, and raise your game. Canonical provides a unique window into the world of 21st-century digital business.
Canonical is an equal opportunity employer
We are proud to foster a workplace free from discrimination. Diversity of experience, perspectives, and background create a better work environment and better products. Whatever your identity, we will give your application fair consideration.
Data Engineer
Dec 29 · Boehringer Ingelheim
Sant Cugat del Vallès, ES
Data Engineer
Boehringer Ingelheim · Sant Cugat del Vallès, ES
Python TSQL NoSQL Cloud Computing Scala Hadoop AWS Kafka Spark Big Data Power BI Tableau
We’re looking for a Data Engineer to evolve our data infrastructure, optimize data flows, and guarantee data availability and quality. You will partner closely with data scientists and analysts to keep a consistent, scalable data delivery architecture across all ongoing projects.
Responsibilities
- Design, build, install, test, and maintain highly scalable data management systems
- Ensure solutions meet business requirements and industry best practices
- Integrate/re-engineer emerging data-management and software-engineering technologies into existing data stacks
- Define and document standardized processes for data mining, data modeling, and data production
- Use a variety of languages and tools to stitch systems together (e.g., Python, SQL)
- Recommend improvements to increase data reliability, efficiency, and quality
- Collaborate with data architects, modelers, and IT teams to align on project goals
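The "stitch systems together (e.g., Python, SQL)" responsibility can be shown in miniature with the stdlib `sqlite3` module: land rows with Python, then validate and aggregate with SQL. Table and column names here are illustrative only:

```python
import sqlite3

# In-memory database standing in for a warehouse table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (region TEXT, amount REAL)")

# Python side: load raw rows (in practice, from an upstream system).
rows = [("EU", 120.0), ("EU", 80.0), ("US", 50.0)]
conn.executemany("INSERT INTO sales VALUES (?, ?)", rows)

# SQL side: aggregate for downstream analysts.
totals = dict(conn.execute(
    "SELECT region, SUM(amount) FROM sales GROUP BY region ORDER BY region"
))
```

The same division of labour scales up directly: Python (or Glue/Airflow) moves and shapes data, SQL (Redshift, Snowflake, Databricks) does set-based transformation.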
- Bachelor’s/Master’s degree in Computer Science, Engineering, or related field or equivalent proven experience as a Data Engineer, Software Developer, or similar role
- Proficiency with the Apache ecosystem (Parquet, Hadoop, Spark, Kafka, Airflow)
- Strong hands-on experience with AWS data services (Amazon Redshift, Kinesis, Glue, S3)
- Demonstrated experience building and optimizing big-data pipelines, architectures, and datasets
- Strong analytical skills working with unstructured datasets
- Experience with relational SQL and NoSQL databases, preferably Snowflake and/or Databricks
- Familiarity with data pipeline and workflow orchestration tools
- Strong project management and organizational skills
- Excellent written and verbal communication skills
- Snaplogic knowledge is a plus
- Proficiency in scripting languages such as Python or Scala
- Familiarity with data visualization tools (e.g., Tableau, Power BI, QuickSight)
- AWS Cloud Practitioner, Architecture, Big Data
With us, you can grow, collaborate, innovate, and improve lives. We offer challenges in a global, respectful, and family-like work environment where ideas drive our innovative mindset. Flexible learning and continuous development for our team are key because your growth is our growth.
At Boehringer Ingelheim, gender equality is one of our top priorities. We not only comply with current regulations but also strive to promote it in all areas of our organization, as established in our III Equality Plan. We are committed to creating an inclusive and equitable work environment for everyone!
Our Company
Why Boehringer Ingelheim?
With us, you can develop your own path in a company with a culture that knows our differences are our strengths - and break new ground in the drive to make millions of lives better.
Here, your development is our priority. Supporting you to build a career as part of a workplace that is independent, authentic and bold, while tackling challenging work in a respectful and friendly environment where everyone is valued and welcomed.
Alongside, you have access to programs and groups that ensure your health and wellbeing are looked after - as we make major investments to drive global accessibility to healthcare. By being part of a team that is constantly innovating, you'll be helping to transform lives for generations.
Want to learn more? Visit https://www.boehringer-ingelheim.com
Machine Learning Engineer
Dec 28 · HappyRobot
Barcelona, ES
Machine Learning Engineer
HappyRobot · Barcelona, ES
Python Docker Cloud Computing Kubernetes Machine Learning
About HappyRobot
HappyRobot is the AI-native operating system for the real economy—a system that closes the circuit between intelligence and action. By combining real-time truth, specialized AI workers, and an orchestrating intelligence, we help enterprises run complex, mission-critical operations with true autonomy.
Our AI OS compounds knowledge, optimizes at every level, and evolves over time. We’re starting with supply chain and industrial-scale operations, where resilience, speed, and continuous improvement matter most—freeing humans to focus on strategy, creativity, and other high-value tasks.
You can learn more about our vision in our Manifesto. HappyRobot has raised $62M to date, including our most recent $44M Series B in September 2025. Our investors include Y Combinator (YC), Andreessen Horowitz (a16z), and Base10—partners who believe in our mission to redefine how enterprises operate. We’re channeling this investment into building a world-class team: people with relentless drive, sharp problem-solving skills, and the passion to push limits in a fast-paced, high-intensity environment. If this resonates, you belong at HappyRobot.
About The Role
You’ll be building AI models that make human-like conversations possible. You’ll work at the intersection of speech, language, and intelligence, taking cutting-edge research and transforming it into real-time, scalable systems that power our core products. You’ll have the unique opportunity to make a huge impact as one of our first ML hires, shaping not only the technology but also the direction of our company. From designing robust models to deploying them in production, you’ll own the entire lifecycle of ML systems and help us stay ahead of the curve in AI innovation.
- Design, build, and maintain scalable ML systems — from data ingestion and preprocessing to training, testing, and deployment.
- Develop and optimize end-to-end ML pipelines (data collection, labeling, training, validation, monitoring) to ensure reliability and reproducibility.
- Implement robust MLOps practices, including model versioning, experiment tracking, CI/CD for ML, and continuous monitoring in production.
- Collaborate with product and engineering teams to integrate and deploy models into real-time products with a focus on efficiency and scalability.
- Ensure data quality, observability, and performance across all AI systems.
- Stay current with the latest in AI infrastructure, tooling, and research — helping us stay ahead of the curve.
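The experiment-tracking part of the MLOps practices above can be sketched in plain Python: hash the parameters to get a stable run id, so any metric traces back to the exact configuration that produced it. This is a toy stand-in for tools like MLflow or Weights & Biases, not HappyRobot's actual stack:

```python
import hashlib
import json
import time

# In-memory run registry (a real tracker persists this).
RUNS: list[dict] = []

def log_run(params: dict, metrics: dict) -> str:
    """Version an experiment by hashing its params: identical
    configurations always map to the same run id."""
    run_id = hashlib.sha256(
        json.dumps(params, sort_keys=True).encode()).hexdigest()[:12]
    RUNS.append({"id": run_id, "params": params,
                 "metrics": metrics, "ts": time.time()})
    return run_id
```

Content-addressing the configuration is the key design choice: reruns of the same setup share an id, so regressions and improvements are directly comparable.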
- Strong experience in machine learning, deep learning, and NLP.
- Solid background in MLOps and data pipelines — e.g., model deployment, monitoring, and scaling in production environments.
- Proficiency in Python and familiarity with Go.
- Experience with ML lifecycle management tools (e.g., MLflow, Kubeflow, Weights & Biases).
- Ability to design ML systems for robustness, scalability, and automation.
- Strong coding, debugging, and data engineering skills.
- Passion for AI infrastructure and its real-world impact.
- Founder mindset: ownership, independence, and willingness to go deep.
- Experience in speech recognition, TTS, or audio processing.
- Familiarity with LLMs, generative AI, or real-time inference systems.
- Hands-on experience with data orchestration frameworks (e.g., Airflow, Prefect, Dagster).
- Prior experience in startup environments with fast iteration cycles.
- Knowledge of cloud infrastructure (AWS/GCP/Azure) and containerization tools (Docker, Kubernetes).
- Opportunity to work at a rapidly growing, high-growth AI startup backed by top investors, including a16z, Y Combinator, and Base10.
- Ownership & Autonomy - Take full ownership of projects and ship fast.
- Top-Tier Compensation - Competitive salary + equity in a high-growth startup.
- Comprehensive Benefits - Healthcare, dental, vision coverage.
- Work With the Best - Join a world-class team of engineers and builders.
Extreme Ownership
We take full responsibility for our work, outcomes, and team success. No excuses, no blame-shifting — if something needs fixing, we own it and make it better. This means stepping up, even when it’s not “your job.” If a ball is dropped, we pick it up. If a customer is unhappy, we fix it. If a process is broken, we redesign it. We don’t wait for someone else to solve it — we lead with accountability and expect the same from those around us.
Craftsmanship
Putting care and intention into every task, striving for excellence, and taking deep ownership of the quality and outcome of your work. Craftsmanship means never settling for “just fine.” We sweat the details because details compound. Whether it’s a product feature, an internal doc, or a sales call — we treat it as a reflection of our standards. We aim to deliver jaw-dropping customer experiences by being curious, meticulous, and proud of what we build — even when nobody’s watching.
We are “majos”
Be friendly & have fun with your coworkers. Always be genuine & honest, but kind. “Majo” is our way of saying: be a good human. Be approachable, helpful, and warm. We’re building something ambitious, and it’s easier (and more fun) when we enjoy the ride together. We give feedback with kindness, challenge each other with respect, and celebrate wins together without ego.
Urgency with Focus
Create the highest impact in the shortest amount of time. Move fast, but in the right direction. We operate with speed because time is our most limited resource. But speed without focus is chaos. We prioritize ruthlessly, act decisively, and stay aligned. We aim for high leverage: the biggest results from the simplest, smartest actions. We’re running a high-speed marathon — not a sprint with no strategy.
Talent Density and Meritocracy
Hire only people who can raise the average; ‘exceptional performance is the passing grade.’ Ability trumps seniority. We believe the best teams are built on talent density — every hire should raise the bar. We reward contribution, not titles or tenure. We give ownership to those who earn it, and we all hold each other to a high standard. A-players want to work with other A-players — that’s how we win.
First-Principles Thinking
Strip a problem to physics-level facts, ignore industry dogma, rebuild the solution from scratch. We don’t copy-paste solutions. We go back to basics, ask why things are the way they are, and rebuild from the ground up if needed. This mindset pushes us to innovate, challenge stale assumptions, and move faster than incumbents. It’s how we build what others think is impossible.
The personal data provided in your application and during the selection process will be processed by Happyrobot, Inc., acting as Data Controller.
By sending us your CV, you consent to the processing of your personal data for the purpose of evaluating and selecting you as a candidate for the position. Your personal data will be treated confidentially and will only be used for the recruitment process of the selected job offer.
Regarding the retention period, your personal data will be deleted after three months of inactivity, in compliance with the GDPR and personal data protection legislation.
If you wish to exercise your rights of access, rectification, erasure, portability, or objection in relation to your personal data, you can do so by contacting [email protected], in accordance with the GDPR.
For more information, visit https://www.happyrobot.ai/privacy-policy
By submitting your request, you confirm that you have read and understood this clause and that you agree to the processing of your personal data as described.
Data Engineer (Airflow)
23 Dec · WIZELINE · Barcelona, ES
Python Agile TSQL AWS
We are:
Wizeline, a global AI-native technology solutions provider, develops cutting-edge, AI-powered digital products and platforms. We partner with clients to leverage data and AI, accelerating market entry and driving business transformation. As a global community of innovators, we foster a culture of growth, collaboration, and impact. With the right people and the right ideas, there's no limit to what we can achieve.
Are you a fit?
Sounds awesome, right? Now, let's make sure you're a good fit for the role:
Key Responsibilities
- Data Migration and Pipeline Development
- Data Modeling and Transformation
- Troubleshooting and Optimization
- Collaboration and Documentation
Must-have Skills:
- Bachelor's or Master's degree in Computer Science, Engineering, or a related quantitative field.
- 3+ years of experience in data engineering, with a focus on building and maintaining scalable data pipelines.
- Solid experience with data migration projects and working with large datasets.
- Strong hands-on experience with Snowflake, including data loading, querying, and performance optimization.
- Proficiency in dbt (data build tool) for data transformation and modeling.
- Proven experience with Apache Airflow for scheduling and orchestrating data workflows.
- Expert-level SQL skills, including complex joins, window functions, and performance tuning.
- Proficiency in Python for data manipulation, scripting, and automating edge cases.
- Familiarity with PySpark, AWS Athena, and Google BigQuery (source systems).
- Understanding of data warehousing concepts, dimensional modeling, and ELT principles.
- Knowledge of building CI/CD pipelines for code deployment.
- Experience with version control systems (e.g., GitHub).
- Excellent problem-solving, analytical, and communication skills.
- Ability to work independently and as part of a collaborative team in an agile environment.
- Fluent spoken and written English; an effective communicator.
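As a quick illustration of the window-function skills listed above, here is a minimal sketch using Python's bundled sqlite3 module (SQLite 3.25+ supports window functions). The table and values are invented for the example.

```python
import sqlite3

# Toy data, invented for the example: per-region order amounts.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (region TEXT, amount INTEGER)")
conn.executemany(
    "INSERT INTO orders VALUES (?, ?)",
    [("EU", 100), ("EU", 300), ("US", 200), ("US", 50)],
)

# A running total and a per-group rank: PARTITION BY splits the rows into
# groups, ORDER BY sorts within each group before the function is applied.
rows = conn.execute(
    """
    SELECT region,
           amount,
           SUM(amount) OVER (PARTITION BY region ORDER BY amount) AS running_total,
           RANK() OVER (PARTITION BY region ORDER BY amount DESC) AS rank_in_region
    FROM orders
    ORDER BY region, amount
    """
).fetchall()

for row in rows:
    print(row)
# ('EU', 100, 100, 2)
# ('EU', 300, 400, 1)
# ('US', 50, 50, 2)
# ('US', 200, 250, 1)
```

The same pattern carries over to Snowflake and BigQuery, which share this standard window-function syntax.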
Nice-to-have:
- AI Tooling Proficiency: Leverage one or more AI tools to optimize and augment day-to-day work, including drafting, analysis, research, or process automation. Provide recommendations on effective AI use and identify opportunities to streamline workflows.
What we offer:
- A High-Impact Environment
- Commitment to Professional Development
- Flexible and Collaborative Culture
- Global Opportunities
- Vibrant Community
- Total Rewards
*Specific benefits are determined by the employment type and location.