
Senior Data Engineer

Posted 3 months ago


💎 Seniority level: Senior, 5+ years

📍 Location: Argentina

🔍 Industry: Nonprofit fundraising technology

🏢 Company: GoFundMe 👥 251-500 💰 Series A over 9 years ago 🫂 Last layoff over 2 years ago | Internet, Crowdfunding, Peer to Peer

🗣️ Languages: English

⏳ Experience: 5+ years

🪄 Skills: AWS, Python, SQL, AWS EKS, ETL, Java, Kubernetes, Snowflake, C++, Airflow, REST API, Collaboration, Terraform

Requirements:
  • 5+ years as a data engineer designing, developing, and maintaining business data warehouse solutions consisting of structured and unstructured data.
  • Proficiency with building and orchestrating data pipelines using ETL/data preparation tools.
  • Expertise in orchestration tools like Airflow or Prefect.
  • Proficiency in connecting data through web APIs.
  • Proficiency in writing and optimizing SQL queries.
  • Solid knowledge of Python and other programming languages.
  • Experience with Snowflake is required.
  • Good understanding of database architecture and best practices.
Responsibilities:
  • Develop and maintain enterprise data warehouse (Snowflake).
  • Develop and orchestrate ELT data pipelines, sourcing data from databases and web APIs (a minimal illustrative sketch follows this list).
  • Integrate data from the warehouse into third-party tools for actionable insights.
  • Develop and sustain REST API endpoints for data science products.
  • Provide ongoing maintenance and improvements to existing data solutions.
  • Monitor and optimize Snowflake usage for performance and cost-effectiveness.
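
As a point of reference for the ELT-orchestration responsibility above, here is a minimal sketch of an Airflow DAG that pulls records from a web API and loads them into Snowflake. It assumes Airflow 2.x with the Snowflake provider installed, a connection named `snowflake_default`, and hypothetical table and endpoint names; it illustrates the general pattern only, not GoFundMe's actual pipeline.

```python
# Minimal illustrative ELT DAG: web API -> staging -> Snowflake (names are hypothetical placeholders).
from datetime import datetime

import requests
from airflow import DAG
from airflow.operators.python import PythonOperator
from airflow.providers.snowflake.operators.snowflake import SnowflakeOperator


def extract_events(**context):
    # Pull a small batch from a hypothetical source API; a real pipeline would
    # land this payload in a stage (e.g., S3 or an internal Snowflake stage).
    resp = requests.get("https://api.example.com/v1/events", timeout=30)
    resp.raise_for_status()
    return resp.json()


with DAG(
    dag_id="events_elt",
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    extract = PythonOperator(task_id="extract_events", python_callable=extract_events)

    # Transform-in-warehouse step: move staged raw rows into an analytics table.
    load = SnowflakeOperator(
        task_id="load_events",
        snowflake_conn_id="snowflake_default",
        sql="INSERT INTO analytics.events SELECT * FROM raw.events_stage;",
    )

    extract >> load
```
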
Apply

Related Jobs

🔥 Senior Data Engineer
Posted 3 days ago

📍 Europe, APAC, Americas

🧭 Full-Time

🔍 Software Development

🏢 Company: Docker 👥 251-500 💰 $105,000,000 Series C almost 3 years ago | Developer Tools, Developer Platform, Information Technology, Software

  • 4+ years of relevant industry experience
  • Experience with data modeling and building scalable pipelines
  • Proficiency with Snowflake or BigQuery
  • Experience with data governance and security controls
  • Experience creating ETL scripts using Python and SQL
  • Familiarity with a cloud ecosystem: AWS/Azure/Google Cloud
  • Experience with Tableau or Looker
  • Manage and develop ETL jobs, warehouse, and event collection tools
  • Build and manage the Central Data Model for reporting
  • Integrate emerging methodologies and technologies
  • Build data pipelines for ML and AI projects
  • Contribute to SOC2 compliance across the data platform
  • Document technical architecture

Python, SQL, ETL, Snowflake, Airflow, Data engineering, Data visualization, Data modeling

Apply

📍 Latin America

🧭 Full-Time

🔍 Insurance Industry

🏢 Company: Nearsure 👥 501-1000 | Staffing Agency, Outsourcing, Software

  • Bachelor's Degree in Computer Science or related field
  • 5+ years experience with Python and Scala for data engineering
  • 5+ years experience with AWS ecosystem
  • 3+ years experience with Kubernetes
  • 2+ years experience with Scala and Spark for data processing
  • Expert in SQL programming
  • Design, develop, maintain, and enhance data engineering solutions
  • Build scalable pipelines focusing on data quality
  • Integrate business knowledge with technical functionalities
  • Collaborate with application engineers and data scientists
  • Automate existing code and processes

AWS, Python, SQL, Apache Airflow, ETL, Kubernetes, Spark, Scala

Posted 8 days ago
Apply

📍 LatAm

🧭 Full-Time

🔍 B2B data and intelligence

🏢 Company: Truelogic 👥 101-250 | Consulting, Web Development, Web Design, Software

  • 8+ years of experience as a Data/BI engineer.
  • Experience developing data pipelines with Airflow or equivalent code-based orchestration software.
  • Strong SQL skills and hands-on experience with SQL and NoSQL databases, including analysis and performance optimization.
  • Hands-on experience with Python or an equivalent programming language.
  • Experience with data warehouse solutions (e.g., BigQuery, Redshift, Snowflake).
  • Experience with data modeling, data catalog concepts, data formats, data pipelines/ETL design, implementation, and maintenance.
  • Experience with AWS/GCP cloud services such as GCS/S3, Lambda/Cloud Function, EMR/Dataproc, Glue/Dataflow, and Athena.
  • Experience with data quality checks.
  • Experience with dbt.
  • eFront knowledge.
  • Strong, clear communication skills.
  • Building and continuously improving our data gathering, modeling, and reporting capabilities, and self-service data platforms.
  • Working closely with Data Engineers, Data Analysts, Data Scientists, Product Owners, and Domain Experts to identify data needs.

AWS, Python, SQL, Cloud Computing, ETL, Snowflake, Airflow, Data engineering, Communication Skills, Data modeling

Posted 13 days ago
Apply

📍 Brazil, Argentina, Peru, Colombia, Uruguay

🔍 AdTech

🏢 Company: Workana Premium

  • 6+ years of experience in data engineering or related roles, preferably within the AdTech industry.
  • Expertise in SQL and experience with relational databases such as BigQuery and SpannerDB or similar.
  • Experience with GCP services, including Dataflow, Pub/Sub, and Cloud Storage.
  • Experience building and optimizing ETL/ELT pipelines in support of audience segmentation and analytics use cases.
  • Experience with Docker and Kubernetes for containerization and orchestration.
  • Familiarity with message queues or event-streaming tools, such as Kafka or Pub/Sub.
  • Knowledge of data modeling, schema design, and query optimization for performance at scale.
  • Programming experience in languages like Python, Go, or Java for data engineering tasks.
  • Build and optimize data pipelines and ETL/ELT processes to support AdTech products: Insights, Activation, and Measurement.
  • Leverage GCP tools like BigQuery, SpannerDB, and Dataflow to process and analyze real-time consumer-permissioned data.
  • Design scalable and robust data solutions to power audience segmentation, targeted advertising, and outcome measurement.
  • Develop and maintain APIs to facilitate data sharing and integration across the platform’s products.
  • Optimize database and query performance to ensure efficient delivery of advertising insights and analytics.
  • Work with event-driven architectures using tools like Pub/Sub or Kafka to ensure seamless data processing (see the sketch after this list).
  • Proactively monitor and troubleshoot issues to maintain data accuracy, security, and performance.
  • Drive innovation by identifying opportunities to enhance the platform’s capabilities in audience targeting and measurement.
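
As a rough illustration of the event-driven responsibility noted above, the following is a minimal sketch of a Pub/Sub subscriber that consumes audience events for downstream processing. The project, subscription, and field names are hypothetical placeholders, and the sketch assumes the `google-cloud-pubsub` client library; it is not the platform's actual code.

```python
# Minimal illustrative Pub/Sub consumer for an event-driven ingest path (all names are placeholders).
import json
from concurrent.futures import TimeoutError as FutureTimeoutError

from google.cloud import pubsub_v1

PROJECT_ID = "my-adtech-project"         # hypothetical project
SUBSCRIPTION_ID = "audience-events-sub"  # hypothetical subscription

subscriber = pubsub_v1.SubscriberClient()
subscription_path = subscriber.subscription_path(PROJECT_ID, SUBSCRIPTION_ID)


def handle_message(message: pubsub_v1.subscriber.message.Message) -> None:
    event = json.loads(message.data.decode("utf-8"))
    # A real pipeline would validate the event and write it to a staging table
    # (e.g., BigQuery) before segmentation and measurement models run.
    print(f"received event for segment: {event.get('segment_id')}")
    message.ack()


streaming_pull = subscriber.subscribe(subscription_path, callback=handle_message)
print(f"listening on {subscription_path}...")

try:
    # Block the main thread while background callback threads process messages.
    streaming_pull.result(timeout=60)
except FutureTimeoutError:
    streaming_pull.cancel()
    streaming_pull.result()
```
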

Docker, Python, SQL, ETL, GCP, Java, Kafka, Kubernetes, Go, Data modeling

Posted 27 days ago
Apply
🔥 Senior Data Engineer
Posted about 1 month ago

📍 South Africa, Mauritius, Kenya, Nigeria

🔍 Technology, Marketplaces

  • BSc degree in Computer Science, Information Systems, Engineering, or related technical field or equivalent work experience.
  • 3+ years related work experience.
  • Minimum of 2 years of experience building and optimizing ‘big data’ pipelines and architectures and maintaining data sets.
  • Experienced in Python.
  • Experienced in SQL (PostgreSQL, MS SQL).
  • Experienced in using cloud services: AWS, Azure or GCP.
  • Proficiency in version control, CI/CD and GitHub.
  • Understanding/experience in Glue and PySpark highly desirable.
  • Experience in managing data life cycle.
  • Proficiency in manipulating, processing and architecting large disconnected data sets for analytical requirements.
  • Ability to maintain and optimise processes supporting data transformation, data structures, metadata, dependency and workload management.
  • Good understanding of data management principles - data quality assurance and governance.
  • Strong analytical skills related to working with unstructured datasets.
  • Understanding of message queuing, stream processing, and highly scalable ‘big data’ datastores.
  • Strong attention to detail.
  • Good communication and interpersonal skills.
  • Suggest efficiencies and implement internal process improvements to automate manual processes.
  • Implement enhancements and new features across data systems.
  • Improve and streamline processes within data systems with support from the Senior Data Engineer.
  • Test CI/CD process for optimal data pipelines.
  • Assemble large, complex data sets that meet functional / non-functional business requirements.
  • Build and maintain highly efficient ETL processes.
  • Develop and conduct unit tests on data pipelines and ensure data consistency (a minimal test sketch follows this list).
  • Develop and maintain automated monitoring solutions.
  • Support reporting and analytics infrastructure.
  • Maintain data quality and data governance, and handle overall upkeep of data infrastructure systems.
  • Maintain data warehouse and data lake metadata, data catalogue, and user documentation for internal business users.
  • Ensure best practices are implemented and maintained on databases.
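
To make the pipeline-testing responsibility above more concrete, here is a minimal pytest-style sketch of data consistency checks over a small extract. The `orders` data and its columns are hypothetical, and real checks would typically run against warehouse tables or inside the pipeline itself rather than on an in-memory sample.

```python
# Minimal illustrative data-quality checks on a hypothetical 'orders' extract.
import pandas as pd


def load_orders() -> pd.DataFrame:
    # Stand-in for a real extract from PostgreSQL or the warehouse.
    return pd.DataFrame(
        {
            "order_id": [1, 2, 3],
            "amount": [19.99, 5.00, 42.50],
            "created_at": pd.to_datetime(["2024-01-01", "2024-01-02", "2024-01-03"]),
        }
    )


def test_order_ids_are_unique():
    df = load_orders()
    assert df["order_id"].is_unique, "duplicate order_id values break downstream joins"


def test_amounts_are_non_negative():
    df = load_orders()
    assert (df["amount"] >= 0).all(), "negative amounts indicate a broken source extract"


def test_no_missing_timestamps():
    df = load_orders()
    assert df["created_at"].notna().all(), "missing created_at values break partitioning"
```
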

AWS, PostgreSQL, Python, SQL, ETL, Git, CI/CD

Apply
🔥 Senior Data Engineer
Posted 3 months ago

📍 Brazil, Argentina

🧭 Full-Time

🔍 Manufacturing services

🏢 Company: Xometry 👥 501-1000 💰 $75,000,000 Series E over 4 years ago | Artificial Intelligence (AI), 3D Printing, Industrial Engineering, Software

  • Bachelor's degree required, or relevant experience.
  • 3-5+ years of prior experience as a software engineer or data engineer in a fast-paced, technical, problem-solving environment.
  • Cloud Data Warehouse experience - Snowflake.
  • Expert in SQL.
  • Expert in ETL, data modeling, and version control; dbt and GitHub preferred.
  • Data modeling best practices for transactional and analytical processing.
  • Experience with data extraction tools (Fivetran, Airbyte, etc.).
  • Experience with event tracking software (Segment, Tealium, etc.).
  • Experience with a programming language such as Python or JavaScript.
  • Experience with Business Intelligence tools, Looker preferred.
  • Ability to communicate effectively and influence others.
  • Ability to work in a fast-paced environment and shift gears quickly.
  • Must be able to work core hours aligned to US Eastern Time (GMT-5).
  • Collaborate closely with other engineers and product managers as a valued member of an autonomous, cross-functional team.
  • Build analytics models that utilize the data pipeline to provide actionable insights into key business performance metrics.
  • Develop data models that assist analytics and data science team members in building and optimizing their datasets.
  • Maintain data pipelines and perform any changes or alterations as requested.
  • Deploy and release functionality through software integration to support DevOps and CI/CD pipelines.

Python, SQL, Business Intelligence, ETL, JavaScript, Snowflake, Collaboration, CI/CD, DevOps

Apply

📍 Argentina, Colombia, Costa Rica, Mexico

🔍 Data Analytics

  • Proficient with SQL and data visualization tools (e.g., Tableau, Power BI, Google Data Studio).
  • Programming skills, mainly SQL.
  • Knowledge and experience with Python and/or R a plus.
  • Experience with Alteryx a plus.
  • Experience working with Google Cloud and AWS a plus.
  • Analyze data and consult with subject matter experts to design and develop business rules for data processing.
  • Set up and/or maintain existing dataflows in data wrangling tools like Alteryx or Google Dataprep.
  • Create and/or maintain SQL scripts.
  • Monitor, troubleshoot, and remediate data quality across marketing data systems.
  • Design and execute data quality checks.
  • Maintain ongoing management and stewardship of data governance, processing, and reporting.
  • Govern taxonomy additions, application, and use.
  • Serve as a knowledge expert for operational processes and identify improvement areas.
  • Evaluate opportunities for simplification and/or automation for reporting and processes.

AWS, Python, SQL, Data Analysis, GCP, Microsoft Power BI, Tableau, Amazon Web Services, Data engineering

Posted 4 months ago
Apply
🔥 Senior Data Engineer
Posted 4 months ago

📍 Mexico, Gibraltar, Colombia, USA, Brazil, Argentina

🧭 Full-Time

🔍 FinTech

🏢 Company: Bitso

  • Proven English fluency.
  • 3+ years professional working experience with analytics, ETLs, and data systems.
  • 3+ years with SQL databases, data lakes, big data, and cloud infrastructure.
  • 3+ years experience with Spark.
  • BS or Master's in Computer Science or similar.
  • Strong proficiency in SQL, Python, and AWS.
  • Strong data modeling skills.
  • Build processes required for optimal extraction, transformation, and loading of data from various sources using SQL, Python, Spark.
  • Identify, design, and implement internal process improvements while optimizing data delivery and redesigning infrastructure for scalability.
  • Ensure data integrity, quality, and security.
  • Work with stakeholders to assist with data-related technical issues and support their data needs.
  • Manage data separation and security across multiple data sources.

AWS, Python, SQL, Business Intelligence, Machine Learning, Data engineering, Data Structures, Spark, Communication Skills, Data modeling

Apply