Middle Big Data Developer (Automotive Domain)

  • Remote (Ukraine only)
  • Big Data

This position is a perfect match for an enthusiastic Big Data Engineer who would like to develop a Data Processing platform.

As a Big Data Engineer, you will design and develop a robust and scalable solution for data versioning and distribution, orchestration of data merge & conflict handling processes, as well as for providing Data Dashboards and analytical reports to end-users.

You are going to be a part of a distributed team that follows agile best practices, Continuous Integration, performance testing, capacity planning, and documentation.


Our customer is a company with more than 20 years’ experience in setting standards in the automotive aftermarket worldwide as a digital innovator and a provider of leading expert solutions.

The client is distinguished by in-depth knowledge of the industry and a commitment to the highest levels of quality and security, based on sustainable technologies.

The company's primary goal is to support its clients through digital transformation with data-driven solutions and comprehensive consulting services, enabling effective and efficient business processes and delivering practical digital solutions and innovative services that make day-to-day business easier.


You will work with the Data Warehouse solution, designed to become a unified storage tool for structured and unstructured data generated by the customer's various solutions. The solution is a data processing tool that collects data and delivers it to the Data Distribution Team.

Meet your team!
  • Responsibilities

    • Building up an infrastructure for the data distribution platform with a level of customization and scalability that allows quick changes and the creation of Proof-of-Concept prototypes
    • Designing and developing data pipelines
    • Creating and customizing ETL scripts with AWS serverless technologies and frameworks
    • Living by agile principles and collaborating with team members using agile techniques
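
To give a flavor of the serverless ETL work described above, here is a minimal, hypothetical sketch: a pure-Python transform wrapped in a Lambda-style handler, of the kind that could run behind AWS Lambda or Glue. The record schema, field names, and handler shape are illustrative assumptions, not details of the actual platform.

```python
import json
from datetime import datetime, timezone

def transform_records(raw_records):
    """Normalize raw records (hypothetical automotive-parts schema) for
    downstream distribution: drop incomplete rows, standardize identifiers,
    and stamp each record with a processing time."""
    cleaned = []
    for rec in raw_records:
        if not rec.get("part_id") or rec.get("price") is None:
            continue  # skip incomplete records
        cleaned.append({
            "part_id": str(rec["part_id"]).strip().upper(),
            "price_eur": round(float(rec["price"]), 2),
            "processed_at": datetime.now(timezone.utc).isoformat(),
        })
    return cleaned

def handler(event, context=None):
    """Lambda-style entry point. Here the records arrive in the event
    payload; in a real deployment they would typically come from S3
    objects or SQS messages instead."""
    records = event.get("records", [])
    return {"statusCode": 200, "body": json.dumps(transform_records(records))}
```

In production, a handler like this would be triggered by S3 events or SQS messages and write its output back to S3 for the distribution pipeline, but the core transform stays a plain, unit-testable function.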
  • Requirements

    • At least 3 years of experience in Python development
    • At least 2 years of experience with Apache Spark
    • At least 1 year of experience with Kafka
    • Good knowledge of database design
    • Good experience with AWS Serverless (Glue, Lambda, Step Functions, S3, SNS/SQS)
    • Results-oriented mindset
    • Ability to work in a self-managed manner within set time-boxes
    • At least an Intermediate level of English

      Nice to have:

    • Experience with Apache Hudi
    • Experience with the Serverless Framework (serverless.com)