Eneco · Location: Rotterdam · 20 October 2025
Senior Full-Stack Data Engineer
Eneco - Rotterdam
> 5 years
Digital & Tech, Data
Lead the design and implementation of scalable data platforms, setting technical direction for pipelines, APIs, and production workflows.
Enable and mentor teammates and Data Scientists, ensuring Databricks, Snowflake, and DBT environments are production-ready and cost-efficient.
Drive engineering excellence through clean code, CI/CD, observability, and architectural decision-making that supports Eneco's long-term vision.
Why choose Eneco?
At Eneco, we're working hard to achieve our mission: sustainable energy for everyone. Learn more about how we're putting this into action in our One Planet Plan.
What you'll do
As a Senior Full-Stack Data Engineer, you will play a crucial role in our diverse team, solving real-world forecasting problems with cutting-edge ML models. Our product leverages a modern data stack end-to-end: from data ingestion into Snowflake, to transformations with DBT, to running forecasts in Databricks, and finally exposing results through Python APIs and aggregation services.
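To give a feel for the last step of that stack, here is a minimal, purely illustrative sketch of a small Python service exposing forecast results from Snowflake. It assumes FastAPI and the snowflake-connector-python client; the account, table, and column names are hypothetical, not Eneco's actual setup.

```python
# Minimal sketch (illustrative only): serving forecast rows from
# Snowflake via a FastAPI endpoint. All names below are hypothetical.
from fastapi import FastAPI
import snowflake.connector

app = FastAPI()

def get_connection():
    # In production, credentials would come from a secrets manager, not code.
    return snowflake.connector.connect(
        account="my_account",        # hypothetical account identifier
        user="svc_forecast",         # hypothetical service user
        password="...",              # placeholder
        warehouse="FORECAST_WH",
        database="ANALYTICS",
        schema="FORECASTS",
    )

@app.get("/forecasts/{asset_id}")
def read_forecast(asset_id: str, limit: int = 24):
    """Return the most recent forecast rows for one asset."""
    query = """
        SELECT ts, forecast_mw
        FROM hourly_forecast          -- hypothetical table
        WHERE asset_id = %s
        ORDER BY ts DESC
        LIMIT %s
    """
    with get_connection() as conn:
        with conn.cursor() as cur:
            cur.execute(query, (asset_id, limit))
            rows = cur.fetchall()
    return [{"ts": str(ts), "forecast_mw": mw} for ts, mw in rows]
```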
This product has high visibility and impact at Eneco, driving innovation in how we forecast, optimize, and deliver energy solutions to our consumers.
Is this about you?
Must Have:
Strong proficiency in SQL (experience with DBT and/or Airflow is preferred).
Solid experience writing clean, maintainable code (preferably in Python).
Hands-on experience with Databricks, specifically deployment and production use.
Strong knowledge of CI/CD pipelines and observability practices.
Experience building and maintaining REST APIs.
Nice to Have:
Experience with high-volume time series data.
Familiarity or interest in MLOps and data science techniques.
Experience deploying applications on Kubernetes. Helm is a plus.
Experience with data ingestion workflows (e.g., Snowpipe, Kafka, or similar).
Familiarity with cloud platforms (e.g., AWS or Azure). Infrastructure as Code (e.g., Terraform) is a plus.
You'll be responsible for
Designing and maintaining robust SQL-based data pipelines for both streaming and batch workloads, leveraging DBT and/or Airflow (see the sketch after this list).
Building and maintaining clean, production-quality Python code, including APIs and aggregation services.
Supporting Data Scientists by ensuring their Databricks environments and workflows are production-ready and scalable.
Applying CI/CD pipelines and observability practices to guarantee reliable and maintainable deployments.
Contributing to application deployments and operations, ensuring solutions run smoothly in production.
Influencing architectural decisions and mentoring teammates to raise engineering standards across the team.
Collaborating with product managers, data scientists, and engineers to deliver high-impact forecasting products.
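As a purely illustrative sketch of the first responsibility above: a minimal Airflow DAG that runs DBT transformations and then DBT tests on a daily schedule. The DAG id, task names, and project path are hypothetical, and the `schedule` argument assumes Airflow 2.4 or later.

```python
# Minimal sketch (illustrative, not Eneco's actual pipeline): a daily
# Airflow DAG that runs dbt transformations followed by dbt tests.
from datetime import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator

with DAG(
    dag_id="daily_forecast_pipeline",   # hypothetical
    start_date=datetime(2025, 1, 1),
    schedule="@daily",                  # Airflow 2.4+; older versions use schedule_interval
    catchup=False,
) as dag:
    # Transform raw Snowflake tables into forecast-ready marts.
    dbt_run = BashOperator(
        task_id="dbt_run",
        bash_command="dbt run --project-dir /opt/dbt/forecasts",   # hypothetical path
    )

    # Validate the transformed data before downstream use.
    dbt_test = BashOperator(
        task_id="dbt_test",
        bash_command="dbt test --project-dir /opt/dbt/forecasts",  # hypothetical path
    )

    dbt_run >> dbt_test
```

Chaining `dbt run` before `dbt test` keeps broken models from reaching downstream consumers, which is one common way to make such pipelines reliable and observable.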
This is where you'll work
You will join a cross-functional team of Data Engineers, Machine Learning Engineers, Data Scientists, and Analysts, all working together to deliver forecasting solutions with real business impact. Collaboration and knowledge-sharing are at the core of how we work: we encourage experimentation, celebrate successes, and learn quickly from setbacks.
Our engineering culture values clean, maintainable code, automation, and end-to-end ownership. You'll have the opportunity to shape data products from ingestion to deployment, contribute to technical decisions, and help ensure our solutions are reliable, scalable, and ready for production.
Together, we drive Eneco's mission to innovate and accelerate the energy transition.
What we have to offer