Middle/Senior Big Data Engineer (Big Data Competence Center)

Remote | Big Data | Data Science | Engineering

Required skills

Python / good
Java / good
Scala / good
AWS / good

Are you looking for a team of professionals on top of cutting-edge technologies?

Are you looking for a place to boost your Big Data career?

We are inviting you to join our Big Data Competence Center, a part of Sigma Software's organizational structure that brings together various clients, interesting projects, and activities to grow your professional skills.

Project

The Big Data Competence Center is a place where we collect the best engineering practices and unite them into a single knowledge base, so we can provide the best technical excellence services to our clients.

We're not just a team gathered to write code. We are all willing to contribute to the field, whether by participating in the life of the Big Data Unit or by constantly growing our own skills.

In our unit, you can be a Big Data Engineer, Team Lead, or Architect; you can become a mentor, the person behind all the new technologies in the team, or simply an active listener if you like. Whatever you decide, be sure: you're not alone. Whether it's a difficult task, an unusual request from the client, or your next project to choose, you'll always have a mentor and like-minded teammates to come up with the best solution.

We take an unusual approach to hiring: we hire people for our team first, not for a specific project. This gives us a chance to get to know you better and to ensure a perfect match between the client's needs and your professional interests.

If you have the courage to become part of the top-notch Big Data community in Ukraine, grab your chance and let's make some history together!

Requirements

  • 4+ years of development experience with Java, Scala, or Python
  • Solid knowledge of algorithms and data structures
  • Experience with AWS services: Glue, Lambda, Step Functions, S3, etc.
  • Experience developing data pipelines based on a mainstream framework such as Spark, Flink, or Presto (see the sketch after this list)
  • Experience developing data lakes and data warehouses based on mainstream technologies such as Hive, Snowflake, OLAP engines, or Hudi
  • Knowledge of SQL and solid experience with NoSQL or RDBMS technologies
  • Experience with data integration and various data formats: CSV, Protobuf, Parquet, Avro, ORC, etc.
  • Solid understanding of technical excellence and hands-on experience with code reviews, test development, and CI/CD
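
For illustration only, here is a minimal sketch, in PySpark (one of the frameworks named above), of the kind of pipeline this role involves: reading raw CSV, transforming it, and writing partitioned Parquet. All paths, column names, and the aggregation logic are hypothetical.

    from pyspark.sql import SparkSession, functions as F

    # Hypothetical job: aggregate raw CSV orders into a partitioned Parquet table
    spark = SparkSession.builder.appName("orders-pipeline").getOrCreate()

    orders = (spark.read
              .option("header", "true")
              .option("inferSchema", "true")
              .csv("s3://raw-zone/orders/"))  # hypothetical input location

    daily = (orders
             .withColumn("order_date", F.to_date("created_at"))
             .groupBy("order_date", "country")
             .agg(F.sum("amount").alias("revenue")))

    (daily.write
          .mode("overwrite")
          .partitionBy("order_date")
          .parquet("s3://curated-zone/daily_revenue/"))  # hypothetical output location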


Would be a plus:

  • Experience building a data platform using another cloud provider's services (GCP or Azure)
  • Experience developing Snowflake-driven data warehouses
  • Experience developing event-driven data pipelines (sketched below)
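
As a rough sketch of what event-driven can mean on the required AWS stack: an S3 object-created event triggers a Lambda function that picks up the new file. The bucket layout and the processing step below are assumptions for illustration, not a description of an actual project.

    import urllib.parse
    import boto3

    s3 = boto3.client("s3")

    def handler(event, context):
        # Each record describes one object-created event delivered by S3
        for record in event["Records"]:
            bucket = record["s3"]["bucket"]["name"]
            key = urllib.parse.unquote_plus(record["s3"]["object"]["key"])
            # Hypothetical processing step: fetch the new object and hand it off
            body = s3.get_object(Bucket=bucket, Key=key)["Body"].read()
            print(f"Received {len(body)} bytes from s3://{bucket}/{key}")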

Responsibilities

  • Contributing to new technology investigations and complex solution design, supporting a culture of innovation with attention to security, scalability, and reliability
  • Working with a modern data stack, coming up with well-designed technical solutions and robust code, and implementing data governance processes
  • Working with the customer's team and communicating with them professionally
  • Taking responsibility for delivering major solution features
  • Participating in requirements gathering and clarification, proposing optimal architecture strategies, and leading the data architecture implementation
  • Developing core modules and functions, designing scalable and cost-effective solutions
  • Performing code reviews and writing unit and integration tests (see the test example after this list)
  • Scaling the distributed system and infrastructure to the next level
  • Building data platforms using the power of modern cloud providers (AWS/GCP/Azure)
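
Since test development is part of the role, here is a small example of the style of unit testing this implies: a transformation factored out as a pure function so it can be verified without cluster infrastructure. The function and data are invented for illustration.

    # A pure transformation extracted from a pipeline so it is easy to unit-test
    def daily_revenue(rows):
        """Sum 'amount' per (date, country) pair; rows are plain dicts."""
        totals = {}
        for row in rows:
            key = (row["date"], row["country"])
            totals[key] = totals.get(key, 0) + row["amount"]
        return totals

    def test_daily_revenue():
        rows = [
            {"date": "2024-01-01", "country": "UA", "amount": 10},
            {"date": "2024-01-01", "country": "UA", "amount": 5},
            {"date": "2024-01-02", "country": "SE", "amount": 7},
        ]
        assert daily_revenue(rows) == {
            ("2024-01-01", "UA"): 15,
            ("2024-01-02", "SE"): 7,
        }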

Extra Responsibilities:

  • Developing micro-batch/real-time streaming pipelines (Lambda architecture)
  • Working on PoCs to validate proposed solutions and migrations
  • Leading the migration to a modern technology platform and providing technical guidance
  • Adhering to CI/CD methods and helping to implement best practices in the team
  • Contributing to unit growth and mentoring other team members (optional)
  • Owning the whole pipeline and optimizing engineering processes
  • Designing complex ETL processes for analytics and data management, and driving their large-scale implementation

WHY US

  • Diversity of Domains & Businesses
  • Variety of technologies
  • Health & Legal support
  • Active professional community
  • Continuous education and growth
  • Flexible schedule
  • Remote work
  • Outstanding offices (if you choose to use them)
  • Sports and community activities
