Senior Data Engineer (Communication Management Solution)

Locations: Bulgaria, Czech Republic, Europe, Poland, Portugal, Remote (Ukraine), Romania, Ukraine | Categories: Data Science, Engineering

Required skills

AWS tools / strong
Python / strong
Kafka / good
Spark / good
English / strong

Do you want to be part of a talented team that cares about its products and strives to make them the best on the market? If so, keep reading!

Sigma Software is looking for a passionate Senior Data Engineer. We specialize in Cloud Technologies and love to make great products that improve our clients’ and partners’ Customer Communication and Engagement capabilities.  

Feeling thrilled? You are welcome to join!  


Our client delivers a SaaS solution for Customer Communication Management (CCM) for Telecommunications, Financial, and Energy sectors.  


This solution is a cloud-based multi-tenant platform that provides end-to-end capabilities to design, produce, store, and access very high volumes of documents (invoices, bills, promotions, etc.).


Requirements:

  • At least 5 years of experience with AWS tools such as Glue, Athena, Redshift, S3, Lambda, and CloudFormation
  • 5+ years of experience with Python
  • Practical experience with Kafka
  • Experience with Spark (PySpark)
  • Good knowledge of SQL
  • Familiarity with running and optimizing Amazon Redshift databases
  • Deep understanding of database development and data modeling
  • Good knowledge of data engineering, e.g., dimensional modeling, ETL, reporting tools, data governance, and data warehousing based on AWS infrastructure
  • Experience with BI visualization tools (Tableau preferred)
  • Upper-Intermediate level of spoken and written English


Nice to have:

  • Knowledge of Machine Learning and Data Science fields (Unsupervised and Supervised Learning, outlier detection)  
  • Experience in developing and deploying machine learning models 


Responsibilities:

  • Providing technical leadership to a team of data engineers implementing architectures based on AWS Cloud services (Glue, Lambda, S3, Redshift, etc.)
  • Building a data warehouse based on Redshift, Redshift Spectrum, and S3
  • Designing, developing, deploying, optimizing, and maintaining data architecture and pipelines that adhere to defined ETL and data lake principles
  • Discovering, understanding, and organizing disparate data sources and structuring them into clean data models with clear and understandable schemas
  • Proposing solutions for real-time and batch data processing
  • Interacting with developers and business stakeholders to understand the problems that need to be solved using data engineering methods


What we offer:

  • Diversity of Domains & Businesses
  • Variety of technology
  • Health & Legal support
  • Active professional community
  • Continuous education and growth
  • Flexible schedule
  • Remote work
  • Outstanding offices (if you choose to work on-site)
  • Sports and community activities

