papernest
Junior Data Engineer: Cloud & DevOps - Barcelona
papernest · Barcelona, ES
Remote · Python TSQL OOP Cloud Computing AWS DevOps Terraform
This year marks 10 years since we launched the idea that simplifying our customers' lives is possible by offering an innovative solution that allows them to easily subscribe to, manage, and switch all types of contracts through a unique and intuitive platform.
In that time, we have supported more than 2 million customers in France, Spain, and Italy, while investing in new verticals and positioning ourselves as a highly efficient, innovative, and competitive scale-up in a rapidly growing market.
With over 900 employees across 3 locations, we are solidifying our position as a market leader in Europe. We are always on the lookout for talent ready to join a dedicated and motivated team driven by a meaningful project. Working with us means embracing a culture of excellence, innovation, and real impact.
We are looking for a Junior Data Engineer, with a Cloud & DevOps orientation. This role is for the engineer who loves the "Engine" part of Data Engineering. You will build the technical foundation that allows our data to flow. You will focus on the "how"—ensuring our infrastructure is automated, our CI/CD is fast, and our data platform is ready for the next generation of AI-driven automation.
Infrastructure as Code: Assist in evolving our stack (Python/Airflow/Docker) hosted on AWS.
DevOps for Data: Maintain and improve our CI/CD pipelines to ensure data deployments are seamless.
OOP Excellence: Build reusable Python modules that standardise how we handle data across the organization.
AI Enablement: Partner with the team to provide the infrastructure needed for AI/ML experimentation.
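As a rough illustration of the kind of reusable, OOP-style Python module this role describes, here is a minimal sketch; the class names and the CSV source are hypothetical and not papernest's actual code:

    # Hypothetical sketch: a small interface that standardises how pipelines
    # read data, so every source is handled the same way downstream.
    from abc import ABC, abstractmethod
    from typing import Iterable, Iterator
    import csv


    class DataSource(ABC):
        """Common contract every pipeline source implements."""

        @abstractmethod
        def extract(self) -> Iterable[dict]:
            """Yield raw records from the underlying system."""


    class CsvSource(DataSource):
        def __init__(self, path: str) -> None:
            self.path = path

        def extract(self) -> Iterator[dict]:
            with open(self.path, newline="") as handle:
                yield from csv.DictReader(handle)


    def load(source: DataSource) -> list[dict]:
        # Callers depend on the interface, never on a concrete source.
        return list(source.extract())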
Ideally, a strong academic background in Software Development.
Tech: EXCELLENT Python (OOP) and SQL.
The Edge: A "Big Plus" for initial experience with AWS or Terraform.
Thrive in an international and inclusive environment: everyone has a place at papernest. With over 46 different nationalities, it’s not uncommon here to start a sentence in English and finish it en français or ¡en español!
💸 Compensation: a plan for Subscription Warrants for Company Creators (BSPCE) in accordance with company regulations, as well as a Pluxee card to manage your tax level through a voluntary compensation system across different services (transportation, dining, and childcare).
🏆 Benefits: as a home insurance provider and a supplier of green electricity and gas, we offer attractive deals to our employees. After all, there’s no reason why things should only be simpler for our customers!
🩺 Health: medical insurance through Alan or Sanitas to manage your healthcare expenses in an ultra-simple, paperless way, with up to 50% coverage by papernest (after 6 months in the company).
🍽️ Meals & partnerships: a healthy breakfast offered every Tuesday, as well as partnerships with various services in Barcelona (restaurants, sports, leisure, and care centers).
📚 Training: the development of our employees is essential. You will have access to ongoing training tailored to your goals, whether it involves technical, language, or managerial skills.
📈 Career Development: numerous opportunities are available for you to grow, whether by deepening your expertise or exploring new paths. We support you in your professional ambitions.
✨ Remote Work: enjoy 2 days of remote work per week to optimize your focus and efficiency.
Hiring process: 1st call with Talent Acquisition
Interview with a team member
Technical Case
Interview with Alex - Head of Data Engineering
Interested in this challenge? 🙂
Don’t hesitate any longer—we look forward to meeting you! Regardless of your age, gender, background, religion, sexual orientation, or disability, there’s a place for you with us. Our selection processes are designed to be inclusive, and our work environment is adapted to everyone’s needs.
We particularly encourage applications from women. Even if you feel that you don’t meet all the criteria outlined in this job posting, please know that every application is valuable. We strongly believe that diverse and varied backgrounds enrich our team, and we will carefully consider your application. Parity and diversity are essential assets to our success.
papernest
Junior Data Engineer: Data Flow & Architecture
papernest · Barcelona, ES
Remote · Python TSQL OOP SaaS
This year marks 10 years since we launched the idea that simplifying our customers' lives is possible by offering an innovative solution that allows them to easily subscribe to, manage, and switch all types of contracts through a unique and intuitive platform.
In that time, we have supported more than 2 million customers in France, Spain, and Italy, while investing in new verticals and positioning ourselves as a highly efficient, innovative, and competitive scale-up in a rapidly growing market.
With over 900 employees across 3 locations, we are solidifying our position as a market leader in Europe. We are always on the lookout for talent ready to join a dedicated and motivated team driven by a meaningful project. Working with us means embracing a culture of excellence, innovation, and real impact.
As a Junior Data Engineer you will be the guardian of data quality and lineage. You’ll be assigned to a squad where the complexity of data flows is high. You won't just move data; you will design the logic that ensures our BigQuery data lake remains a "Single Source of Truth."
Advanced ETL/ELT: Design and implement data processing flows using Python and Airflow.
Data Lineage: Help develop tools that track data from source to destination, ensuring transparency for all users.
Reporting & Quality: Perform daily reporting on the health of customer and internal data flows.
Custom Tooling: Build internal Data Engineering tools to replace manual tasks—no SaaS "black boxes" here.
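A minimal sketch of the kind of data-quality check a role like this might own, assuming the google-cloud-bigquery client; the table and column names are placeholders, not papernest's actual schema:

    # Hypothetical freshness check against a BigQuery table.
    from google.cloud import bigquery


    def check_freshness(table: str = "analytics.orders", max_lag_hours: int = 24) -> None:
        client = bigquery.Client()  # relies on default credentials
        sql = f"""
            SELECT TIMESTAMP_DIFF(CURRENT_TIMESTAMP(), MAX(updated_at), HOUR) AS lag_hours
            FROM `{table}`
        """
        lag_hours = list(client.query(sql).result())[0].lag_hours
        if lag_hours is None or lag_hours > max_lag_hours:
            raise RuntimeError(f"{table} looks stale: last update {lag_hours} hours ago")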
An engineering-school background in Software Development.
Tech: Mastery of Python (OOP) and strong SQL (BigQuery is a plus).
The Edge: You love designing complex systems and have a high attention to detail regarding data integrity.
Thrive in an international and inclusive environment: everyone has a place at papernest. With over 46 different nationalities, it’s not uncommon here to start a sentence in English and finish it en français or ¡en español!
💸 Compensation: a plan for Subscription Warrants for Company Creators (BSPCE) in accordance with company regulations, as well as a Pluxee card to manage your tax level through a voluntary compensation system across different services (transportation, dining, and childcare).
🏆 Benefits: as a home insurance provider and a supplier of green electricity and gas, we offer attractive deals to our employees. After all, there’s no reason why things should only be simpler for our customers!
🩺 Health: medical insurance through Alan or Sanitas to manage your healthcare expenses in an ultra-simple, paperless way, with up to 50% coverage by papernest (after 6 months in the company).
🍽️ Meals & partnerships: a healthy breakfast offered every Tuesday, as well as partnerships with various services in Barcelona (restaurants, sports, leisure, and care centers).
📚 Training: the development of our employees is essential. You will have access to ongoing training tailored to your goals, whether it involves technical, language, or managerial skills.
📈 Career Development: numerous opportunities are available for you to grow, whether by deepening your expertise or exploring new paths. We support you in your professional ambitions.
✨ Remote Work: enjoy 2 days of remote work per week to optimize your focus and efficiency.
Hiring process: 1st call with Talent Acquisition
Interview with a team member
Technical Case
Interview with Alex - Head of Data Engineering
Interested in this challenge? 🙂
Don’t hesitate any longer—we look forward to meeting you! Regardless of your age, gender, background, religion, sexual orientation, or disability, there’s a place for you with us. Our selection processes are designed to be inclusive, and our work environment is adapted to everyone’s needs.
We particularly encourage applications from women. Even if you feel that you don’t meet all the criteria outlined in this job posting, please know that every application is valuable. We strongly believe that diverse and varied backgrounds enrich our team, and we will carefully consider your application. Parity and diversity are essential assets to our success.
Viaplay Group
Data Engineer
Viaplay Group · Barcelona, ES · 1 Jan
Java Python Azure Cloud Computing Scala AWS Terraform Kafka Spark Office
At Viaplay Group, we entertain millions of people every day through our streaming services, radio networks, and TV channels. We believe in the power of content not just as a way of telling stories and touching lives, but also of expanding worlds.
We’re looking for the best people to join us on our journey. Right now, we’re searching for a Data Engineer in our Barcelona office – are you ready to hit play on an exciting career change?
The Role
Behind every great story is data. Data helps us understand our audiences, personalize their experiences, power content decisions, and ultimately deliver the entertainment they love.
Now, we're embarking on a major transformation: migrating to a modern, state-of-the-art data platform that will power the next generation of insights, personalization, and decision-making across our streaming services.
We’re looking for a data engineer to join as a founding member of this platform team.
In this role, you’ll go beyond writing code: you’ll influence architecture, define engineering standards, and help foster a data-first culture. You’ll work with modern cloud-native technology, design self-service data products and build capabilities that enable teams across the organization to make smarter, faster decisions.
What You'll Build
- Design and implement scalable data pipelines - from scratch using cloud-native patterns on AWS
- Build data products – curated, self-service offerings with clear consumers, not just data dumps
- Shape platform architecture – contribute to decisions about tooling, patterns, and technical direction
- Establish engineering standards - for IaC, CI/CD, testing, monitoring, and documentation
- Enable self-service for data analysts, scientists, and product teams - empowering them with reliable, accessible data
- Own end-to-end solutions – from ingestion to delivery, across both batch and streaming systems
- Operate what you build – monitor, troubleshoot, and continuously improve production systems
Cloud: AWS
Processing: Databricks
IaC: Terraform
Languages: Python (Scala)
Orchestration: Airflow (and exploring modern alternatives)
We're evaluating new technologies as we build – you'll have a voice in these decisions.
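As a rough sketch of a batch "data product" job on this stack (PySpark), with storage paths and column names invented purely for illustration:

    # Hypothetical daily aggregation job; bucket paths and columns are placeholders.
    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.appName("daily_viewing_summary").getOrCreate()

    events = spark.read.parquet("s3://example-bucket/raw/playback_events/")

    daily_summary = (
        events
        .withColumn("date", F.to_date("event_ts"))
        .groupBy("date", "title_id")
        .agg(
            F.countDistinct("user_id").alias("unique_viewers"),
            F.sum("watch_seconds").alias("total_watch_seconds"),
        )
    )

    daily_summary.write.mode("overwrite").partitionBy("date").parquet(
        "s3://example-bucket/curated/daily_viewing_summary/"
    )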
What We’re Looking For
We understand you may feel confident ticking certain boxes more than others and that’s why we always keep an open mind in our recruitment process. But, in order to thrive in this role, we do believe you’ll have at least some experience in the following:
- Strong data engineering foundations – you understand data modeling, pipeline design, and data quality
- Solid programming skills - Python, Java, or Scala – you write clean, testable code
- Cloud experience (AWS, GCP, or Azure) – you're comfortable with distributed systems
- Spark or similar frameworks – you've processed data at scale
- Collaborative mindset – you enjoy working in a team and explaining technical concepts
- Curiosity and pragmatism – you want to learn new things but also ship working solutions
- Infrastructure as Code (Terraform, CloudFormation)
- Streaming/real-time data (Flink, Kafka, Kinesis)
- Platform engineering or building shared services
- Data product thinking or domain-driven design
- Open-source contributions or technical writing
- You've been part of a migration or greenfield platform project
Our HQ is in Stockholm, with our Tech hub based in Barcelona. You will become part of an amazing company with great people, content, and culture.
- An attractive offer with beneficial insurance, health care plan, meal vouchers, 30 days of paid vacation, flexible working hours, a hybrid model, flexible remuneration, and more!
- A safe space to grow and up-skill. Our learning culture puts you in the driver’s seat of your own development.
- An innovative environment with Hack Days once a year. This week-long initiative allows you to think outside the box and deliver creative, technical solutions that (more often than not) go on to be implemented, either in our product or our ways of working.
- Entertainment is what we love, and entertainment is what we do. So, unlimited access to Viaplay seems only fair for you to get to know the product – including series & viewing events, new release movie rentals, linear channels and more.
If this feels like your kind of challenge, make sure you apply by attaching your CV here – you may also want to add your LinkedIn profile. Please don’t send us your application via email because we won’t be able to accept it. We do, however, welcome any questions you may have about this particular position.
Want to learn more about who we are and what we do? Check out our careers page or follow us on Instagram! We’re only ever a few clicks away.
Semrush
Senior Data Engineer (Data Operations Team)
Semrush · Barcelona, ES · 29 Dec
Python TSQL Docker Cloud Computing Kubernetes REST SaaS Terraform Office
Hi there!
We are Semrush, a global Tech company developing our own product – a platform for digital marketers.
Are you ready to be a part of it? This is your chance! We’re hiring a Senior Data Engineer (Data Operations Team).
Tasks in the role
General Overview
- Our data ecosystem is built on self-hosted Airflow & dbt Core, along with multiple BigQuery instances.
- The current setup was built several years ago and has become highly customized.
- While this customization supports flexibility, it now limits development speed and reduces analytics efficiency.
- We’re looking for a highly technical expert who can redesign, simplify, and standardize our DWH infrastructure.
- The focus is more on stabilizing and improving the system rather than pure feature development.
- Identify and carefully resolve infrastructure inefficiencies
- Conduct audits of existing infrastructure and propose improvements
- Oversee infrastructure health, performance, and cost efficiency
- Evaluate architecture proposals from peers and provide feedback
- Make key architectural proposals
- Develop and deploy IaC using Terraform
- Create and maintain CI/CD pipelines in GitLab
- Design, build, and optimize data pipelines using BigQuery, Airflow & dbt
- Monitor and troubleshoot cloud infrastructure, pipelines, and workflows
- Support the development and maintenance of ML/AI tools and workflows
- Conduct code reviews for merge requests
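A minimal sketch of how an Airflow DAG might drive dbt Core in a setup like the one described above; the project path, schedule, and task names are assumptions, not Semrush's actual pipelines:

    # Hypothetical DAG: run dbt models, then dbt tests, once a day.
    from datetime import datetime

    from airflow import DAG
    from airflow.operators.bash import BashOperator

    with DAG(
        dag_id="dwh_daily_build",
        start_date=datetime(2025, 1, 1),
        schedule="@daily",
        catchup=False,
    ) as dag:
        dbt_run = BashOperator(
            task_id="dbt_run",
            bash_command="cd /opt/dbt/project && dbt run --target prod",
        )
        dbt_test = BashOperator(
            task_id="dbt_test",
            bash_command="cd /opt/dbt/project && dbt test --target prod",
        )
        dbt_run >> dbt_test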
Hard Skills
- Proficient in System Design & Architecture
- Strong expertise in Airflow management
- Strong expertise in dbt management
- Proficient in IaC tools (Terraform)
- Proficient in CI/CD tools (GitLab)
- Advanced knowledge of SQL
- Proficient in Python
- Experienced in Monitoring & Alerting (Grafana)
- Experience with Containers (Docker, Kubernetes)
- Strong project management skills across the full delivery lifecycle: from requirement gathering and decomposition to roadmapping, prioritization, execution, and delivery
- Proactive and autonomous, able to make efficient decisions with minimal supervision
- Strategic and structured thinker
- Excellent problem-solving skills and attention to detail
- Strong communication and stakeholder management skills, with ability to build reliable partnerships
- Flexible working hours
- Unlimited PTO
- Flexi Benefit for your hobby
- Employee Support Program
- Loss of family member financial aid
- Employee Resource Groups
- Meals, snacks, and drinks at the office
- Corporate events
- Teambuilding
- Training, courses, conferences
Semrush is a leading online visibility management SaaS platform that enables businesses globally to run search engine optimization, pay-per-click, content, social media and competitive research campaigns and get measurable results from online marketing.
We've been developing our product for 17 years and have been awarded G2's Top 100 Software Products, Global and US Search Awards 2021, Great Place to Work Certification, Deloitte Technology Fast 500 and many more. In March 2021 Semrush went public and started trading on the NYSE with the SEMR ticker.
10,000,000+ users in America, Europe, Asia, and Australia have already tried Semrush, and over 1,700 people around the world are working on its development. The Semrush team is constantly growing.
Our Diversity, Equity, and Inclusion commitments
Semrush is an equal opportunity employer. Building a better future for marketers around the world unites people from all backgrounds. Even if you feel that you don’t 100% match all requirements, don’t be discouraged from applying! We are committed to ensuring that everyone feels a sense of belonging in the workplace.
We do not discriminate based upon race, religion, creed, color, national origin, sex, pregnancy, sexual orientation, gender identity, gender expression, age, ancestry, physical or mental disability, or medical condition including medical characteristics, genetic identity, marital status, military service, or any other classification protected by applicable local, state or federal laws.
Our new colleague, we are waiting for you!
Anastasiia Bruk
Talent Acquisition Partner
Canonical
Embedded & Desktop Linux Systems Engineer - Optimisation
Canonical · Barcelona, ES
Remote · Linux C++ Cloud Computing IoT
Work across the full Linux stack from kernel through GUI to optimise Ubuntu, the world's most widely used Linux desktop and server, for the latest silicon.
This is a fast-paced, problem-solving role that's challenging yet very exciting. The right candidate must be resourceful, articulate, and able to deliver on a wide variety of solutions across PC and IoT technologies. Our teams partner with specialist engineers from major silicon companies to integrate next-generation features and performance enhancements for upcoming hardware.
Location: This is a globally remote role.
What your day will look like
- Design and implement the best Ubuntu integration for the latest IoT and server-class hardware platforms and software stacks
- Work with partners to deliver a delightful, optimised, first class Ubuntu experience on their platforms
- Take a holistic approach to the Ubuntu experience on partner platforms with inputs on technical plans, testing strategy, quality metrics
- Participate as technical lead on complex customer engagements involving complete system architectures from cloud to edge
- Help our customers integrate their apps, SDKs, build device OS images, optimize applications with Ubuntu Core, Desktop and Server
- Work with the most advanced operating systems and application technologies available in the enterprise world.
What we are looking for in you
- You love technology and working with brilliant people
- You have a Bachelor's degree in Computer Science, STEM or similar
- You have experience with Linux packaging (Debian, RPM, Yocto)
- You have experience working with open source communities and licences
- You have experience working with C, C++
- You can work in a globally distributed team through self-discipline and self-motivation.
- Experience with graphics stacks
- Good understanding of networking - TCP/IP, DHCP, HTTP/REST
- Basic understanding of security best practices in IoT or server environments
- Good communication skills, ideally public speaking experience
- IoT / Embedded experience – from board and SoC, BMCs, bootloaders and firmware to OS, through apps and services
- Some experience with Docker/OCI containers/K8s
Your base pay will depend on various factors including your geographical location, level of experience, knowledge and skills. In addition to the benefits above, certain roles are also eligible for additional benefits and rewards including annual bonuses and sales incentives based on revenue or utilisation. Our compensation philosophy is to ensure equity right across our global workforce.
In addition to a competitive base pay, we provide all team members with additional benefits, which reflect our values and ideals. Please note that additional benefits may apply depending on the work location and, for more information on these, you can ask in the later stages of the recruitment process.
- Fully remote working environment - we've been working remotely since 2004!
- Personal learning and development budget of USD 2,000 per annum
- Annual compensation review
- Recognition rewards
- Annual holiday leave
- Parental Leave
- Employee Assistance Programme
- Opportunity to travel to new locations to meet colleagues at 'sprints'
- Priority Pass for travel and travel upgrades for long haul company events
Canonical is a pioneering tech firm that is at the forefront of the global move to open source. As the company that publishes Ubuntu, one of the most important open source projects and the platform for AI, IoT and the cloud, we are changing the world on a daily basis. We recruit on a global basis and set a very high standard for people joining the company. We expect excellence - in order to succeed, we need to be the best at what we do.
Canonical has been a remote-first company since its inception in 2004. Work at Canonical is a step into the future, and will challenge you to think differently, work smarter, learn new skills, and raise your game. Canonical provides a unique window into the world of 21st-century digital business.
Canonical is an equal opportunity employer
We are proud to foster a workplace free from discrimination. Diversity of experience, perspectives, and background create a better work environment and better products. Whatever your identity, we will give your application fair consideration.
Amazon Web Services (AWS)
2026 Business Intelligence Engineer Internship
Amazon Web Services (AWS) · Barcelona, ES
TSQL AWS Excel Tableau
Description
- This is a 6-month internship -
Business Intelligence Engineer 2026 - Spain
We’re on the lookout for the curious, those who think big and want to define the world of tomorrow. At Amazon, you will grow into the high impact, visionary person you know you’re ready to be. Every day will be filled with exciting new challenges, developing new skills, and achieving personal growth.
How often can you say that your work changes the world? At Amazon, you’ll say it often. Join us and define tomorrow.
2026 Business Intelligence Engineer Internship - Spain
Do you enjoy solving complex problems and troubleshooting products? Are you passionate about developing test strategies, finding, and tracking bugs to resolution, and innovating on behalf of customers? Do you want to be a part of a fast-paced, ambiguous environment and contribute to one of the most visited sites on the Internet?
At Amazon, we hire the best minds in technology to innovate on behalf of our customers. The intense focus we have on our customers is why we are one of the world’s most beloved brands – customer obsession is part of our company DNA. Business intelligence engineers use cutting-edge technology to solve complex problems and get to see the impact of their work first-hand.
The challenges business intelligence engineers solve for at Amazon are big and affect millions of customers, sellers, and products around the world. Our path is not always simple, so we are selective about who joins us on this journey. There is a certain kind of person who takes on this role at Amazon – someone who is excited by the idea of creating new products, features, and services from scratch while managing ambiguity and the pace of a company whose ship cycles are measured in weeks, not years.
The Amazon University Talent Acquisition Team are looking for ambitious students to join us as interns at the heart of our core consumer business! Internships are flexible in length to fit in with your university’s placement scheme.
Key job responsibilities
- Develop analytical solutions to business problems that utilize the highest standards of analytical rigor and data integrity
- Recognize and adopt best practices in reporting and analysis: data integrity, test design, analysis, validation, and documentation
- Write high quality code to retrieve and analyze data
- Analyze and solve business problems at their root, stepping back to understand the broader context
- Design pragmatic analyses and automated metrics that add value to your business area
- Understand data resources and how, when, and what to use (and what not to use).
- Develop analyses (whether fully formed or exploratory) for the business’ sake, not for analyses’ sake
- Seek to understand the business objectives relevant to your area, and align your work to those objectives and seek to deliver business value
- Proactively and continually, improve your level of knowledge about Amazon’s business and relevant data resources
Our Business Intelligence Engineer builds data pipelines, reports, dashboards, and analyses to deliver metrics and insights to the business.
Our Business Intelligence Engineers tackle some of the most complex challenges in large-scale computing and work in small teams across the company to contribute to the e-commerce platform that's used by millions of people all over the world. With that in mind, we require applicants to demonstrate their technical skills in a number of areas.
About The Team
If you’re insatiably curious and always want to learn more, then you’ve come to the right place. Depending on your location, country, job status and other requirements, some or all of the following benefits may be available to you as an intern.
- Competitive pay
- Impactful project and internship/role deliverables
- Networking opportunities with fellow interns
- Internships events such as speaker series, intern panels, Leadership Principles sessions, Amazon writing skills sessions.
- Mentorship and career development
Internship start dates vary throughout the year.
Internship length is ideally 6 months.
Basic Qualifications
- Are enrolled in or have completed a Bachelor's or Master's degree in Computer Science, Computer Engineering, or a related field
- Can work a minimum of 40 hours/week and commit to an internship of up to 6 months
- Knowledge of BI analytics, reporting, or visualization tools such as Tableau, AWS QuickSight, Cognos, or other third-party tools
- Experience with data querying or modeling with SQL or Excel
Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information. If the country/region you’re applying in isn’t listed, please contact your Recruiting Partner.
Company - AWS EMEA SARL (Spain Branch) - G84
Job ID: A3126661
Boehringer Ingelheim
Data Engineer
Boehringer Ingelheim · Sant Cugat del Vallès, ES · 29 Dec
Python TSQL NoSQL Cloud Computing Scala Hadoop AWS Kafka Spark Big Data Power BI Tableau
We’re looking for a Data Engineer to evolve our data infrastructure, optimize data flows, and guarantee data availability and quality. You will partner closely with data scientists and analysts to keep a consistent, scalable data delivery architecture across all ongoing projects.
Responsibilities
- Design, build, install, test, and maintain highly scalable data management systems
- Ensure solutions meet business requirements and industry best practices
- Integrate/re-engineer emerging data-management and software-engineering technologies into existing data stacks
- Define and document standardized processes for data mining, data modeling, and data production
- Use a variety of languages and tools to stitch systems together (e.g., Python, SQL)
- Recommend improvements to increase data reliability, efficiency, and quality
- Collaborate with data architects, modelers, and IT teams to align on project goals
- Bachelor’s/Master’s degree in Computer Science, Engineering, or a related field, or equivalent proven experience as a Data Engineer, Software Developer, or in a similar role
- Proficiency with the Apache ecosystem (Parquet, Hadoop, Spark, Kafka, Airflow); see the streaming sketch after this list
- Strong hands-on experience with AWS data services (Amazon Redshift, Kinesis, Glue, S3)
- Demonstrated experience building and optimizing big-data pipelines, architectures, and datasets
- Strong analytical skills working with unstructured datasets
- Experience with relational SQL and NoSQL databases, preferably Snowflake and/or Databricks
- Familiarity with data pipeline and workflow orchestration tools
- Strong project management and organizational skills
- Excellent written and verbal communication skills
- Snaplogic knowledge is a plus
- Proficiency in scripting languages such as Python or Scala
- Familiarity with data visualization tools (e.g., Tableau, Power BI, QuickSight)
- AWS Cloud Practitioner, Architecture, Big Data
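As a rough sketch of the Kafka-plus-Spark side of that ecosystem (Structured Streaming), with the broker, topic, and storage paths invented for illustration; running it also requires the spark-sql-kafka connector package:

    # Hypothetical streaming ingestion: Kafka topic -> parquet landing zone.
    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.appName("kafka_ingest").getOrCreate()

    raw = (
        spark.readStream.format("kafka")
        .option("kafka.bootstrap.servers", "broker:9092")
        .option("subscribe", "events")
        .load()
    )

    parsed = raw.select(
        F.col("key").cast("string"),
        F.col("value").cast("string").alias("payload"),
        "timestamp",
    )

    query = (
        parsed.writeStream.format("parquet")
        .option("path", "s3://example-bucket/landing/events/")
        .option("checkpointLocation", "s3://example-bucket/checkpoints/events/")
        .start()
    )
    query.awaitTermination()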
With us, you can grow, collaborate, innovate, and improve lives. We offer challenges in a global, respectful, and family-like work environment where ideas drive our innovative mindset. Flexible learning and continuous development for our team are key because your growth is our growth.
At Boehringer Ingelheim, gender equality is one of our top priorities. We not only comply with current regulations but also strive to promote it in all areas of our organization, as established in our III Equality Plan. We are committed to creating an inclusive and equitable work environment for everyone!
Our Company
Why Boehringer Ingelheim?
With us, you can develop your own path in a company with a culture that knows our differences are our strengths - and break new ground in the drive to make millions of lives better.
Here, your development is our priority. Supporting you to build a career as part of a workplace that is independent, authentic and bold, while tackling challenging work in a respectful and friendly environment where everyone is valued and welcomed.
Alongside, you have access to programs and groups that ensure your health and wellbeing are looked after - as we make major investments to drive global accessibility to healthcare. By being part of a team that is constantly innovating, you'll be helping to transform lives for generations.
Want to learn more? Visit https://www.boehringer-ingelheim.com
HappyRobot
Machine Learning Engineer
HappyRobot · Barcelona, ES · 28 Dec
Python Docker Cloud Computing Kubernetes Machine Learning
About HappyRobot
HappyRobot is the AI-native operating system for the real economy—a system that closes the circuit between intelligence and action. By combining real-time truth, specialized AI workers, and an orchestrating intelligence, we help enterprises run complex, mission-critical operations with true autonomy.
Our AI OS compounds knowledge, optimizes at every level, and evolves over time. We’re starting with supply chain and industrial-scale operations, where resilience, speed, and continuous improvement matter most—freeing humans to focus on strategy, creativity, and other high-value tasks.
You can learn more about our vision in our Manifesto. HappyRobot has raised $62M to date, including our most recent $44M Series B in September 2025. Our investors include Y Combinator (YC), Andreessen Horowitz (a16z), and Base10—partners who believe in our mission to redefine how enterprises operate. We’re channeling this investment into building a world-class team: people with relentless drive, sharp problem-solving skills, and the passion to push limits in a fast-paced, high-intensity environment. If this resonates, you belong at HappyRobot.
About The Role
You’ll be building AI models that make human-like conversations possible. You’ll work at the intersection of speech, language, and intelligence, taking cutting-edge research and transforming it into real-time, scalable systems that power our core products. You’ll have the unique opportunity to make a huge impact as one of our first ML hires, shaping not only the technology but also the direction of our company. From designing robust models to deploying them in production, you’ll own the entire lifecycle of ML systems and help us stay ahead of the curve in AI innovation.
- Design, build, and maintain scalable ML systems — from data ingestion and preprocessing to training, testing, and deployment.
- Develop and optimize end-to-end ML pipelines (data collection, labeling, training, validation, monitoring) to ensure reliability and reproducibility.
- Implement robust MLOps practices, including model versioning, experiment tracking, CI/CD for ML, and continuous monitoring in production (see the sketch after this list).
- Collaborate with product and engineering teams to integrate and deploy models into real-time products with a focus on efficiency and scalability.
- Ensure data quality, observability, and performance across all AI systems.
- Stay current with the latest in AI infrastructure, tooling, and research — helping us stay ahead of the curve.
- Strong experience in machine learning, deep learning, and NLP.
- Solid background in MLOps and data pipelines — e.g., model deployment, monitoring, and scaling in production environments.
- Proficiency in Python and familiarity with Go.
- Experience with ML lifecycle management tools (e.g., MLflow, Kubeflow, Weights & Biases).
- Ability to design ML systems for robustness, scalability, and automation.
- Strong coding, debugging, and data engineering skills.
- Passion for AI infrastructure and its real-world impact.
- Founder mindset: ownership, independence, and willingness to go deep.
- Experience in speech recognition, TTS, or audio processing.
- Familiarity with LLMs, generative AI, or real-time inference systems.
- Hands-on experience with data orchestration frameworks (e.g., Airflow, Prefect, Dagster).
- Prior experience in startup environments with fast iteration cycles.
- Knowledge of cloud infrastructure (AWS/GCP/Azure) and containerization tools (Docker, Kubernetes).
- Opportunity to work at a high-growth AI startup, backed by top investors.
- Rapidly growing and backed by top investors including a16z, Y Combinator, and Base10.
- Ownership & Autonomy - Take full ownership of projects and ship fast.
- Top-Tier Compensation - Competitive salary + equity in a high-growth startup.
- Comprehensive Benefits - Healthcare, dental, vision coverage.
- Work With the Best - Join a world-class team of engineers and builders
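As a rough sketch of the experiment-tracking and model-versioning side of those MLOps practices, using MLflow (one of the lifecycle tools named above); the model, dataset, and metric are toy placeholders:

    # Hypothetical MLflow run: log params, a metric, and a versioned model artifact.
    import mlflow
    import mlflow.sklearn
    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression

    X, y = make_classification(n_samples=500, random_state=0)
    model = LogisticRegression(max_iter=200)

    with mlflow.start_run(run_name="baseline"):
        mlflow.log_param("max_iter", 200)
        model.fit(X, y)
        mlflow.log_metric("train_accuracy", model.score(X, y))
        mlflow.sklearn.log_model(model, "classifier")  # stored as a run artifact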
Extreme Ownership
We take full responsibility for our work, outcomes, and team success. No excuses, no blame-shifting — if something needs fixing, we own it and make it better. This means stepping up, even when it’s not “your job.” If a ball is dropped, we pick it up. If a customer is unhappy, we fix it. If a process is broken, we redesign it. We don’t wait for someone else to solve it — we lead with accountability and expect the same from those around us.
Craftsmanship
Putting care and intention into every task, striving for excellence, and taking deep ownership of the quality and outcome of your work. Craftsmanship means never settling for “just fine.” We sweat the details because details compound. Whether it’s a product feature, an internal doc, or a sales call — we treat it as a reflection of our standards. We aim to deliver jaw-dropping customer experiences by being curious, meticulous, and proud of what we build — even when nobody’s watching.
We are “majos”
Be friendly & have fun with your coworkers. Always be genuine & honest, but kind. “Majo” is our way of saying: be a good human. Be approachable, helpful, and warm. We’re building something ambitious, and it’s easier (and more fun) when we enjoy the ride together. We give feedback with kindness, challenge each other with respect, and celebrate wins together without ego.
Urgency with Focus
Create the highest impact in the shortest amount of time. Move fast, but in the right direction. We operate with speed because time is our most limited resource. But speed without focus is chaos. We prioritize ruthlessly, act decisively, and stay aligned. We aim for high leverage: the biggest results from the simplest, smartest actions. We’re running a high-speed marathon — not a sprint with no strategy.
Talent Density and Meritocracy
Hire only people who can raise the average; ‘exceptional performance is the passing grade.’ Ability trumps seniority. We believe the best teams are built on talent density — every hire should raise the bar. We reward contribution, not titles or tenure. We give ownership to those who earn it, and we all hold each other to a high standard. A-players want to work with other A-players — that’s how we win.
First-Principles Thinking
Strip a problem to physics-level facts, ignore industry dogma, rebuild the solution from scratch. We don’t copy-paste solutions. We go back to basics, ask why things are the way they are, and rebuild from the ground up if needed. This mindset pushes us to innovate, challenge stale assumptions, and move faster than incumbents. It’s how we build what others think is impossible.
The personal data provided in your application and during the selection process will be processed by Happyrobot, Inc., acting as Data Controller.
By sending us your CV, you consent to the processing of your personal data for the purpose of evaluating and selecting you as a candidate for the position. Your personal data will be treated confidentially and will only be used for the recruitment process of the selected job offer.
Regarding the retention period of your personal data, it will be deleted after three months of inactivity, in compliance with the GDPR and personal data protection legislation.
If you wish to exercise your rights of access, rectification, deletion, portability or opposition in relation to your personal data, you can do so through [email protected] subject to the GDPR.
For more information, visit https://www.happyrobot.ai/privacy-policy
By submitting your request, you confirm that you have read and understood this clause and that you agree to the processing of your personal data as described.
WIZELINE
Data Engineer (Airflow)
WIZELINE · Barcelona, ES · 23 Dec
Python Agile TSQL AWS
We are:
Wizeline, a global AI-native technology solutions provider, develops cutting-edge, AI-powered digital products and platforms. We partner with clients to leverage data and AI, accelerating market entry and driving business transformation. As a global community of innovators, we foster a culture of growth, collaboration, and impact. With the right people and the right ideas, there's no limit to what we can achieve.
Are you a fit?
Sounds awesome, right? Now, let's make sure you're a good fit for the role:
Key Responsibilities
- Data Migration and Pipeline Development
- Data Modeling and Transformation
- Troubleshooting and Optimization
- Collaboration and Documentation
Must-have Skills:
- Bachelor's or Master's degree in Computer Science, Engineering, or a related quantitative field.
- 3+ years of experience in data engineering, with a focus on building and maintaining scalable data pipelines.
- Solid experience with data migration projects and working with large datasets.
- Strong hands-on experience with Snowflake, including data loading, querying, and performance optimization.
- Proficiency in dbt (data build tool) for data transformation and modeling.
- Proven experience with Apache Airflow for scheduling and orchestrating data workflows.
- Expert-level SQL skills, including complex joins, window functions, and performance tuning.
- Proficiency in Python for data manipulation, scripting, and automation for edge cases
- Familiarity with PySpark, AWS Athena, and Google BigQuery (source systems).
- Understanding of data warehousing concepts, dimensional modeling, and ELT principles.
- Knowledge of building CI/CD pipelines for code deployment
- Experience with version control systems (e.g., Github).
- Excellent problem-solving, analytical, and communication skills.
- Ability to work independently and as part of a collaborative team in an agile environment.
- Must speak and write English fluently; effective communicator
Nice-to-have:
- AI Tooling Proficiency: Leverage one or more AI tools to optimize and augment day-to-day work, including drafting, analysis, research, or process automation. Provide recommendations on effective AI use and identify opportunities to streamline workflows.
What we offer:
- A High-Impact Environment
- Commitment to Professional Development
- Flexible and Collaborative Culture
- Global Opportunities
- Vibrant Community
- Total Rewards
*Specific benefits are determined by the employment type and location.