- Knowledge of Big Data technologies, solutions, and concepts (Spark, Hive, MapReduce) and multiple languages (YAML, Python)
- Experience with Airflow, Spark, AWS, and Databricks
- Strong foundation in software engineering principles, with experience working on data-centric systems
- Proficiency in Python or another major programming language, and a passion for writing clean, maintainable code
- Strong knowledge of SQL query performance optimization
- Experience building multidimensional data models (Star and/or Snowflake schema)
- Understanding of the data lifecycle and concepts such as lineage, governance, privacy, retention, and anonymization
- Knowledge of infrastructure areas such as containers and orchestration (Kubernetes, ECS), CI/CD strategies, infrastructure as code (Terraform), and observability (Prometheus, Grafana)
- Excellent communication skills, proactively sharing and collaborating with both technical and non-technical stakeholders to translate business needs into scalable data solutions
- Curiosity, attention to detail, and the ability to thrive in a fast-paced, data-driven environment