Senior Big Data Engineer

Locations: Argentina, Brazil, Colombia, Latin America, Mexico | Category: Big Data Engineering

Required skills

Python / strong
SQL / strong
PySpark / good
AWS / strong
English / strong

Join Sigma Software’s AdTech Competence Center — a team of 300+ experts delivering innovative, high-load, and data-driven advertising technology solutions. Since 2008, we’ve been helping leading AdTech companies and startups design, build, and scale their technology products.

We focus on fostering deep domain expertise, building long-term client partnerships, and growing together as a global team of professionals passionate about AdTech, data, and cloud-based solutions.

Does this sound like an exciting opportunity? Keep reading, and let’s discuss your future role!

Customer

Our client is an international AdTech company developing modern, privacy-safe, and data-driven advertising platforms. The team works with AWS and cutting-edge data technologies to build scalable, high-performance systems.

Project

The project revolves around the development of a next-generation AdTech platform that powers real-time, data-driven advertising. It leverages AWS, Python, and distributed data frameworks to process large-scale datasets efficiently and securely — enabling businesses to make smarter, faster, and more informed marketing decisions.

Requirements

  • 5+ years of experience in data engineering or backend development
  • Strong knowledge of Python and SQL
  • Hands-on experience with AWS (S3, Glue, Lambda, DynamoDB)
  • Practical knowledge of PySpark or other distributed processing frameworks
  • Experience with NoSQL databases (MongoDB or DynamoDB)
  • Good understanding of ETL principles, data modeling, and performance optimization
  • Understanding of data security and compliance in cloud environments
  • English proficiency at Upper-Intermediate level or higher

Personal Profile

  • Strong communication and collaboration skills in cross-functional environments
  • Proactive, accountable, and driven to deliver high-quality results

Responsibilities

  • Design, develop, and maintain robust data pipelines and ETL processes using Python, SQL, and PySpark
  • Work with large-scale data storage on AWS (S3, DynamoDB) and MongoDB
  • Ensure high-quality, consistent, and reliable data flows between systems
  • Optimize performance, scalability, and cost efficiency of data solutions
  • Collaborate with backend developers and DevOps engineers to integrate and deploy data components
  • Implement monitoring, logging, and alerting for production data pipelines
  • Participate in architecture design, propose improvements, and mentor mid-level engineers

Why Us

  • Diversity of domains & businesses
  • Variety of technologies
  • Health & legal support
  • Active professional community
  • Continuous education and growth
  • Flexible schedule
  • Remote work
  • Outstanding offices (if you choose to work from one)
  • Sports and community activities

REF3723M
