Strengthen the data team with architectural best practices and low-level optimizations.
Support the evolution of data integration pipelines (Debezium, Kafka, dlt), data modelling (dbt), database engines (Snowflake), MLOps (Airflow, MLflow), BI reporting (Metabase, Observable, Text-to-SQL), and reverse ETL syncs (Census).
Support business units with feature requests, bug fixes, and data quality issues.
Enforce code quality, automated testing, and code style.
Requirements
3+ years of experience in Data Infrastructure Engineer / Data Engineer / MLOps Engineer roles.
Hands-on work experience or troubleshooting experience in the following areas:
Data Pipelines: deployment, configuration, monitoring (Kafka, Airflow or similar).
Data Modelling: a DRY, structured approach with performance-tuning techniques.
Containerizing applications and code: Docker, K8s.
Fluency in SQL and Python.
At least an intermediate level of English.
Experience in researching and integrating open-source technologies (data ingestion, data modelling, BI reporting, LLM applications, etc.).
Ability to identify performance bottlenecks.
Teamwork: GitOps, Continuous Integration, code reviews.
Graduate of a technical university.
What we offer
Wheely expects the very best from our people, both on the road and in the office. In return, employees enjoy flexible working hours, stock options and an exceptional range of perks and benefits.