
Senior Data Engineer

Posted 1 day ago

💎 Seniority level: Senior, 3+ years

📍 Location: States of São Paulo and Rio Grande do Sul, Rio de Janeiro, Belo Horizonte

🔍 Industry: Data Engineering

🏢 Company: TELUS Digital Brazil

🗣️ Languages: English

⏳ Experience: 3+ years

🪄 Skills: AWS, Docker, Python, SQL, Agile, Apache Airflow, Cloud Computing, ETL, Kubernetes, Data engineering, Data science, Communication Skills, Analytical Skills, Teamwork, Data modeling, English communication

Requirements:
  • At least 3 years of experience as a Data Engineer
  • Have actively participated in the design and development of data architectures
  • Hands-on experience in developing and optimizing data pipelines
  • Experience working with databases and data modeling projects, as well as practical experience utilizing SQL
  • Effective English communication - able to explain technical and non-technical concepts to different audiences
  • Experience with a general-purpose programming language such as Python or Scala
  • Ability to work well in teams and interact effectively with others
  • Ability to work independently and manage multiple tasks simultaneously while meeting deadlines
Responsibilities:
  • Develop and optimize scalable, high-performing, secure, and reliable data pipelines that address diverse business needs and considerations (see the sketch after this list)
  • Identify opportunities to enhance internal processes, implement automation to streamline manual tasks, and contribute to infrastructure redesign
  • Act as a guide and mentor to junior engineers, supporting their professional growth and fostering an inclusive working environment
  • Collaborate with cross-functional teams to ensure data quality and support data-driven decision-making, striving for greater functionality in our data systems
  • Collaborate with project managers and product owners to assist in prioritizing, estimating, and planning development tasks
  • Provide constructive feedback and share expertise with fellow team members, fostering mutual growth and learning
  • Engage in ongoing research and adoption of new technologies, libraries, frameworks, and best practices to enhance the capabilities of the data team
  • Demonstrate a commitment to accessibility and ensure that your work considers and positively impacts others
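
A minimal sketch of the pipeline work described above, using Apache Airflow and Python from the skills list. This is illustrative only: the DAG name, task names, and placeholder extract/load logic are hypothetical, and it assumes Airflow 2.4+ for the `schedule` argument.

```python
from datetime import datetime, timedelta

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract_orders():
    # Placeholder extract step: real code would query a source system.
    return [{"order_id": 1, "amount": 42.0}]


def load_to_warehouse(ti):
    # Placeholder load step: pulls the extracted rows from XCom; real code
    # would write them to the warehouse.
    rows = ti.xcom_pull(task_ids="extract_orders")
    print(f"loading {len(rows)} rows")


with DAG(
    dag_id="daily_sales_etl",  # hypothetical pipeline name
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
    default_args={"retries": 2, "retry_delay": timedelta(minutes=5)},
):
    extract = PythonOperator(task_id="extract_orders", python_callable=extract_orders)
    load = PythonOperator(task_id="load_to_warehouse", python_callable=load_to_warehouse)
    extract >> load  # run the load only after a successful extract
```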

Related Jobs

🔥 Senior Data Engineer
Posted 18 days ago

📍 Europe, APAC, Americas

🧭 Full-Time

🔍 Software Development

🏢 Company: Docker (👥 251-500, 💰 $105,000,000 Series C almost 3 years ago; Developer Tools, Developer Platform, Information Technology, Software)

Requirements:
  • 4+ years of relevant industry experience
  • Experience with data modeling and building scalable pipelines
  • Proficiency with Snowflake or BigQuery
  • Experience with data governance and security controls
  • Experience creating ETL scripts using Python and SQL (see the sketch after this list)
  • Familiarity with a cloud ecosystem: AWS/Azure/Google Cloud
  • Experience with Tableau or Looker
Responsibilities:
  • Manage and develop ETL jobs, warehouse, and event collection tools
  • Build and manage the Central Data Model for reporting
  • Integrate emerging methodologies and technologies
  • Build data pipelines for ML and AI projects
  • Contribute to SOC2 compliance across the data platform
  • Document technical architecture
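
This posting pairs Python-and-SQL ETL with Snowflake, so a hedged sketch of one such load step may help. The account settings, table names, and MERGE logic are hypothetical; it assumes the snowflake-connector-python package and a staging table populated upstream.

```python
import os

import snowflake.connector

conn = snowflake.connector.connect(
    account=os.environ["SNOWFLAKE_ACCOUNT"],
    user=os.environ["SNOWFLAKE_USER"],
    password=os.environ["SNOWFLAKE_PASSWORD"],
    warehouse="ANALYTICS_WH",  # hypothetical warehouse
    database="RAW",            # hypothetical database
    schema="EVENTS",
)
try:
    # Upsert-style load from a staging table into the target table.
    conn.cursor().execute(
        """
        MERGE INTO events AS tgt
        USING events_stage AS src
          ON tgt.event_id = src.event_id
        WHEN MATCHED THEN UPDATE SET tgt.payload = src.payload
        WHEN NOT MATCHED THEN INSERT (event_id, payload)
                              VALUES (src.event_id, src.payload)
        """
    )
    conn.commit()
finally:
    conn.close()
```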

Python, SQL, ETL, Snowflake, Airflow, Data engineering, Data visualization, Data modeling

🔥 Senior Data Engineer
Posted 24 days ago

📍 LATAM

🧭 Full-Time

🔍 Fintech

Requirements:
  • 7+ years of experience with ETL, SQL, Power BI, Tableau, or similar technologies
  • Strong understanding of data modeling, database design, and SQL
  • Experience working with Apache Kafka or Amazon MSK (see the sketch after this list)
  • Extensive experience delivering solutions on Snowflake or other cloud-based data warehouses, and a general understanding of data warehousing technologies and event-driven architectures
  • Proficiency in Python/R and familiarity with modern data engineering practices
  • Strong analytical and problem-solving skills with a focus on delivering high-quality solutions
  • Experience with machine learning (ML) and building natural language interfaces for business data
  • Proven track record in a fast-paced Agile development environment
  • Ability to work autonomously while effectively engaging with multiple business teams and stakeholders
Responsibilities:
  • Design, develop, and maintain scalable data pipelines for ingesting, processing, and transforming large volumes of data from various sources
  • Implement data ingestion frameworks to efficiently collect data from internal and external sources
  • Optimize data pipelines for performance, reliability, and scalability
  • Develop and deliver scalable, unit-tested data assets and products that empower analysts and drive business workflows
  • Evaluate and continuously improve existing data products and solutions for performance, scalability and security
  • Implement data quality management, including software for data correction, reconciliation, and validation of data workflows, to ensure accuracy and integrity in the data warehouse
  • Collaborate with engineers, data scientists, and product managers to analyze edge cases and plan for architectural scalability
  • Lead the deployment and maintenance of multiple data solutions such as business dashboards and machine learning models
  • Champion best practices in data development, design, and architecture
  • Conduct comprehensive code reviews, providing mentorship and meaningful feedback to junior team members
  • Collaborate with other team members to create and maintain process documentation, data flows, and ETL diagrams for both new and existing data pipelines and processes
  • Monitor data pipelines for performance, reliability, and security issues
  • Implement logging, monitoring, and alerting systems to detect and respond to data-related issues proactively
  • Drive the team's Agile process, ensuring high standards of productivity and collaboration
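
Since Kafka (or Amazon MSK) features in the requirements, here is a hedged sketch of a stream consumer; the topic, broker, and consumer group are hypothetical, and it assumes the kafka-python package.

```python
import json

from kafka import KafkaConsumer

consumer = KafkaConsumer(
    "payments.transactions",              # hypothetical topic
    bootstrap_servers=["broker-1:9092"],  # hypothetical broker
    group_id="warehouse-loader",
    auto_offset_reset="earliest",
    value_deserializer=lambda raw: json.loads(raw.decode("utf-8")),
)

for message in consumer:
    event = message.value
    # Placeholder handler: real code would validate, transform, and buffer
    # events for a batched warehouse write, with error handling and metrics.
    print(message.topic, message.partition, message.offset, event)
```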

AWS, Python, SQL, Agile, Cloud Computing, ETL, Machine Learning, Snowflake, Tableau, Apache Kafka, Data engineering, REST API, Data visualization, Data modeling

Posted 29 days ago

📍 LatAm

🧭 Full-Time

🔍 B2B data and intelligence

🏢 Company: Truelogic (👥 101-250; Consulting, Web Development, Web Design, Software)

Requirements:
  • 8+ years of experience as a Data/BI engineer.
  • Experience developing data pipelines with Airflow or equivalent code-based orchestration software.
  • Strong SQL abilities and hands-on experience with SQL and NoSQL databases, performing analysis and performance optimization.
  • Hands-on experience in Python or equivalent programming language
  • Experience with data warehouse solutions (like BigQuery, Redshift, or Snowflake)
  • Experience with data modeling, data catalog concepts, data formats, and data pipeline/ETL design, implementation, and maintenance.
  • Experience with AWS/GCP cloud services such as GCS/S3, Lambda/Cloud Functions, EMR/Dataproc, Glue/Dataflow, and Athena.
  • Experience with data quality checks (see the sketch after this list)
  • Experience with dbt
  • eFront knowledge
  • Strong and clear communication skills
Responsibilities:
  • Build and continuously improve our data gathering, modeling, and reporting capabilities and self-service data platforms.
  • Work closely with Data Engineers, Data Analysts, Data Scientists, Product Owners, and Domain Experts to identify data needs.
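
The quality-check requirement above could look like the following hedged sketch: a null/negative-value audit against BigQuery, one of the warehouses named in the requirements. The project, dataset, and column names are hypothetical; it assumes the google-cloud-bigquery package and ambient GCP credentials.

```python
from google.cloud import bigquery

client = bigquery.Client()

query = """
    SELECT COUNT(*) AS bad_rows
    FROM `my_project.analytics.orders`  -- hypothetical table
    WHERE order_id IS NULL OR amount < 0
"""
bad_rows = next(iter(client.query(query).result())).bad_rows

if bad_rows:
    # In a real pipeline this would raise an alert or fail the pipeline task.
    raise ValueError(f"Data quality check failed: {bad_rows} invalid rows")
```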

AWS, Python, SQL, Cloud Computing, ETL, Snowflake, Airflow, Data engineering, Communication Skills, Data modeling

🔥 Senior Data Engineer (AdTech)
Posted about 1 month ago

📍 Brazil, Argentina, Peru, Colombia, Uruguay

🔍 AdTech

🏢 Company: Workana Premium

Requirements:
  • 6+ years of experience in data engineering or related roles, preferably within the AdTech industry.
  • Expertise in SQL and experience with databases such as BigQuery and SpannerDB or similar.
  • Experience with GCP services, including Dataflow, Pub/Sub, and Cloud Storage.
  • Experience building and optimizing ETL/ELT pipelines in support of audience segmentation and analytics use cases.
  • Experience with Docker and Kubernetes for containerization and orchestration.
  • Familiarity with message queues or event-streaming tools, such as Kafka or Pub/Sub.
  • Knowledge of data modeling, schema design, and query optimization for performance at scale.
  • Programming experience in languages like Python, Go, or Java for data engineering tasks.
Responsibilities:
  • Build and optimize data pipelines and ETL/ELT processes to support AdTech products: Insights, Activation, and Measurement.
  • Leverage GCP tools like BigQuery, SpannerDB, and Dataflow to process and analyze real-time consumer-permissioned data.
  • Design scalable and robust data solutions to power audience segmentation, targeted advertising, and outcome measurement.
  • Develop and maintain APIs to facilitate data sharing and integration across the platform’s products.
  • Optimize database and query performance to ensure efficient delivery of advertising insights and analytics.
  • Work with event-driven architectures using tools like Pub/Sub or Kafka to ensure seamless data processing (see the sketch after this list)
  • Proactively monitor and troubleshoot issues to maintain data accuracy, security, and performance.
  • Drive innovation by identifying opportunities to enhance the platform’s capabilities in audience targeting and measurement.
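
For the event-driven work mentioned above, a hedged sketch of a GCP Pub/Sub subscriber follows; the project, subscription, and handler logic are hypothetical, and it assumes the google-cloud-pubsub package.

```python
from google.cloud import pubsub_v1

subscriber = pubsub_v1.SubscriberClient()
subscription_path = subscriber.subscription_path("my-project", "ad-events-sub")


def handle(message: pubsub_v1.subscriber.message.Message) -> None:
    # Placeholder handler: real code would parse the ad event and update
    # audience segments or measurement aggregates.
    print("received:", message.data)
    message.ack()  # acknowledge so the message is not redelivered


future = subscriber.subscribe(subscription_path, callback=handle)
try:
    future.result()  # block and process messages until interrupted
except KeyboardInterrupt:
    future.cancel()
```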

Docker, Python, SQL, ETL, GCP, Java, Kafka, Kubernetes, Go, Data modeling

🔥 Senior Data Engineer
Posted about 2 months ago

📍 Worldwide

🔍 Event technology

Requirements:
  • Experience in data engineering and building data pipelines.
  • Proficiency in programming languages like Python, Java, or Scala.
  • Familiarity with cloud platforms and data architecture design.
Responsibilities:
  • Design and develop data solutions to enhance the functionality of the platform.
  • Implement efficient data pipelines and ETL processes.
  • Collaborate with cross-functional teams to define data requirements.

AWS, Docker, PostgreSQL, Python, SQL, Apache Airflow, Cloud Computing, Data Analysis, ETL, Kubernetes, Algorithms, Apache Kafka, Data engineering, Data Structures, CI/CD, RESTful APIs, Microservices, Data visualization, Data modeling

🔥 Senior Data Engineer
Posted about 2 months ago

📍 South Africa, Mauritius, Kenya, Nigeria

🔍 Technology, Marketplaces

Requirements:
  • BSc degree in Computer Science, Information Systems, Engineering, or a related technical field, or equivalent work experience.
  • 3+ years of related work experience.
  • Minimum of 2 years of experience building and optimizing ‘big data’ pipelines and architectures and maintaining data sets.
  • Experienced in Python.
  • Experienced in SQL (PostgreSQL, MS SQL).
  • Experienced in using cloud services: AWS, Azure or GCP.
  • Proficiency in version control, CI/CD and GitHub.
  • Understanding of or experience with Glue and PySpark highly desirable (see the sketch after this list).
  • Experience in managing data life cycle.
  • Proficiency in manipulating, processing and architecting large disconnected data sets for analytical requirements.
  • Ability to maintain and optimise processes supporting data transformation, data structures, metadata, dependency and workload management.
  • Good understanding of data management principles - data quality assurance and governance.
  • Strong analytical skills related to working with unstructured datasets.
  • Understanding of message queuing, stream processing, and highly scalable ‘big data’ datastores.
  • Strong attention to detail.
  • Good communication and interpersonal skills.
Responsibilities:
  • Suggest efficiencies and implement internal process improvements that automate manual processes.
  • Implement enhancements and new features across data systems.
  • Improve and streamline processes within data systems, with support from the Senior Data Engineer.
  • Test CI/CD processes for optimal data pipelines.
  • Assemble large, complex data sets that meet functional / non-functional business requirements.
  • Build highly efficient ETL processes.
  • Develop and conduct unit tests on data pipelines, as well as ensure data consistency.
  • Develop and maintain automated monitoring solutions.
  • Support reporting and analytics infrastructure.
  • Maintain data quality and data governance, as well as the overall upkeep of data infrastructure systems.
  • Maintain data warehouse and data lake metadata, the data catalogue, and user documentation for internal business users.
  • Ensure best practices are implemented and maintained on databases.
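
Since Glue and PySpark are called out as desirable, here is a hedged sketch of the kind of PySpark rollup such pipelines perform; the bucket paths and column names are hypothetical.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("orders-daily-rollup").getOrCreate()

# Hypothetical input path; on Glue this would typically come from the catalog.
orders = spark.read.parquet("s3://raw-bucket/orders/")

daily = (
    orders
    .withColumn("order_date", F.to_date("created_at"))
    .groupBy("order_date")
    .agg(F.count("*").alias("orders"), F.sum("amount").alias("revenue"))
)

daily.write.mode("overwrite").parquet("s3://curated-bucket/orders_daily/")
```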

AWS, PostgreSQL, Python, SQL, ETL, Git, CI/CD

🔥 Senior Data Engineer
Posted 3 months ago

📍 Brazil, Argentina

🧭 Full-Time

🔍 Manufacturing services

🏢 Company: Xometry (👥 501-1000, 💰 $75,000,000 Series E over 4 years ago; Artificial Intelligence (AI), 3D Printing, Industrial Engineering, Software)

Requirements:
  • Bachelor's degree required, or relevant experience.
  • 3-5+ years of prior experience as a software engineer or data engineer in a fast-paced, technical, problem-solving environment.
  • Cloud Data Warehouse experience - Snowflake.
  • Expert in SQL.
  • Expert in ETL, data modeling, and version control; dbt and GitHub preferred.
  • Data modeling best practices for transactional and analytical processing.
  • Experience with data extraction tools (Fivetran, Airbyte, etc.).
  • Experience with event tracking software (Segment, Tealium, etc).
  • Experience with a programming language like Python or JavaScript.
  • Experience with Business Intelligence tools, Looker preferred.
  • Ability to communicate effectively and influence others.
  • Ability to work in a fast-paced environment and shift gears quickly.
  • Must be able to work core hours aligned to US Eastern Time (GMT-5).
Responsibilities:
  • Collaborate closely with other engineers and product managers as a valued member of an autonomous, cross-functional team.
  • Build analytics models that utilize the data pipeline to provide actionable insights into key business performance metrics.
  • Develop data models that assist analytics and data science team members in building and optimizing data.
  • Maintain data pipelines and perform any changes or alterations as requested.
  • Develop and release functionality through software integration to support DevOps and CI/CD pipelines.

Python, SQL, Business Intelligence, ETL, JavaScript, Snowflake, Collaboration, CI/CD, DevOps

🔥 Senior Data Engineer
Posted 4 months ago

📍 Mexico, Gibraltar, Colombia, USA, Brazil, Argentina

🧭 Full-Time

🔍 FinTech

🏢 Company: Bitso

Requirements:
  • Proven English fluency.
  • 3+ years of professional experience with analytics, ETL, and data systems.
  • 3+ years of experience with SQL databases, data lakes, big data, and cloud infrastructure.
  • 3+ years of experience with Spark.
  • BS or Master's in Computer Science or a similar field.
  • Strong proficiency in SQL, Python, and AWS.
  • Strong data modeling skills.
Responsibilities:
  • Build processes required for optimal extraction, transformation, and loading of data from various sources using SQL, Python, and Spark.
  • Identify, design, and implement internal process improvements while optimizing data delivery and redesigning infrastructure for scalability.
  • Ensure data integrity, quality, and security.
  • Work with stakeholders to assist with data-related technical issues and support their data needs.
  • Manage data separation and security across multiple data sources.

AWS, Python, SQL, Business Intelligence, Machine Learning, Data engineering, Data Structures, Spark, Communication Skills, Data modeling
