Middle/Senior Big Data Engineer (Big Data Competence Center)

Remote (Ukraine only)
Big Data Engineering

Required skills

Python / good
Java / good
Scala / good
AWS / good
English / strong

Are you looking for a team of professionals working at the cutting edge of technology?

Are you on the lookout for a place to boost your Big Data career?

We invite you to join our Big Data Competence Center, part of Sigma Software's broader organizational structure, which brings together various clients, interesting projects, and activities to grow your professional skills.

Project

The Big Data Competence Center is a place where we collect the best engineering practices and unite them into a single knowledge base to provide top-tier technical excellence services to our clients.

We are not just a team gathered to write code. We are all willing to contribute to the field, either by participating in the life of the Big Data Unit or by constantly growing our own skills. In our unit, you can be a Big Data engineer, team lead, or architect; you can become a mentor, the person behind all the new technologies in the team, or simply an active listener if you prefer. Whatever you decide, be sure of one thing: you are not alone. Whether it is a difficult task, an unusual request from a client, or your next project to choose, you will always have a mentor and like-minded teammates to help you come up with the best solution.

We take an unusual approach to hiring: we hire people for our team first rather than for a specific project. This gives us a chance to get to know you better and ensure a perfect match between the client's needs and your professional interests.

If you have the courage to become part of the top-notch Big Data community in Ukraine, grab your chance and let's make history together!

Requirements

  • 4+ years of development experience with Java, Scala, or Python
  • Solid knowledge of algorithms and data structures
  • Experience with AWS (Glue, Lambda, Step Functions, S3, etc.)
  • Experience developing data pipelines based on a mainstream framework such as Spark, Flink, or Presto
  • Experience developing data lakes and data warehouses based on mainstream technologies such as Hive, Snowflake, OLAP engines, or Hudi
  • Knowledge of SQL and solid experience with NoSQL or RDBMS technologies
  • Experience with data integrations and different data formats like CSV, Protobuf, Parquet, Avro, ORC, etc.
  • Solid understanding of technical excellence and hands-on experience with code reviews, test development, and CI/CD
  • Upper-Intermediate level of English


Would be a plus:

  • Experience with Azure and GCP
  • Experience developing Snowflake-driven data warehouses
  • Experience developing Kinesis-driven streaming data pipelines

Responsibilities

  • Contributing to new technologies investigations and complex solutions design, supporting a culture of innovation considering matters of security, scalability, and reliability
  • Working with modern data stack, coming up with well-designed technical solutions and robust code, implementing data governance processes
  • Working with the customer's team and communicating professionally
  • Taking responsibility for delivering major solution features
  • Participating in requirements gathering & clarification process, proposing optimal architecture strategies, leading the data architecture implementation
  • Developing core modules and functions, designing scalable and cost-effective solutions
  • Performing code reviews, writing unit and integration tests
  • Scaling the distributed system and infrastructure to the next level
  • Building data platforms using the power of modern cloud providers (AWS/GCP/Azure)


These responsibilities will help you grow professionally and can vary depending on the project and your desire to extend your role in the company:

  • Developing micro-batch/real-time streaming pipelines (Lambda architecture)
  • Working on POCs for validating proposed solutions and migrations
  • Leading migrations to a modern technology platform and providing technical guidance
  • Adhering to CI/CD methods, helping to implement best practices in the team
  • Contributing to unit growth, mentoring other members in the team (optional)
  • Owning the whole pipeline and optimizing the engineering processes
  • Designing complex ETL processes for analytics and data management, and driving their large-scale implementation

WHY US

  • Diversity of Domains & Businesses
  • Variety of technologies
  • Health & Legal support
  • Active professional community
  • Continuous education and growth
  • Flexible schedule
  • Remote work
  • Outstanding offices (if you choose to work from one)
  • Sports and community activities
