Big Data Engineer

Locations: Brazil, Colombia, Mexico, Latin America | Category: Big Data Engineering

Required skills

Python: strong
SQL: strong
PySpark: good
AWS: strong
English: strong

Join the AdTech Competence Center at Sigma Software, a team of over 300 experts who deliver innovative, high-load, and data-driven advertising technology solutions. Since 2008, we’ve been helping leading AdTech companies and startups design, build, and scale their technology products.

We focus on fostering deep domain expertise, building long-term client partnerships, and growing together as a global team of professionals who are passionate about AdTech, data, and cloud-based solutions.

Does this sound like an exciting opportunity? Keep reading, and let’s discuss your future role!

Customer

Our client is an international AdTech company that develops modern, privacy-safe, and data-driven advertising platforms. The team works with AWS and other cutting-edge data technologies to build scalable, high-performance systems.

Project

The project revolves around the development of a next-generation AdTech platform that powers real-time, data-driven advertising. It leverages AWS, Python, and distributed data frameworks to process large-scale datasets efficiently and securely, enabling businesses to make smarter, faster, and more informed marketing decisions.

Requirements

  • 2-4 years of experience as a Data Engineer or Back-end Developer
  • Strong hands-on experience with Python and SQL
  • Experience with AWS data services (S3, DynamoDB, Lambda, Glue)
  • Familiarity with NoSQL databases and API integrations
  • Basic understanding of PySpark or similar distributed frameworks
  • Analytical mindset and proactive problem-solving skills
  • English at Upper-Intermediate level or higher

Will Be a Plus

  • Experience with Airflow or other orchestration tools
  • Interest in big data performance tuning and cloud optimization


Personal Profile

  • Strong communication and collaboration skills
  • Self-motivated, responsible, and able to work independently

Responsibilities

  • Develop and maintain ETL pipelines and data integration services using Python and SQL
  • Work with AWS services (S3, DynamoDB, Lambda, Glue) and NoSQL databases (MongoDB, DynamoDB)
  • Design, optimize, and validate data flows, ensuring data quality across systems
  • Collaborate with Senior engineers on architecture and performance improvements
  • Troubleshoot production data issues and perform root cause analyses
  • Contribute to the continuous improvement of development practices and performance monitoring

Why Us

  • Diversity of Domains & Businesses
  • Variety of technologies
  • Health & Legal support
  • Active professional community
  • Continuous education and growth
  • Flexible schedule
  • Remote work
  • Outstanding offices (if you choose to work from one)
  • Sports and community activities

REF3724E
