
Senior Data Engineer

Posted 4 months ago


💎 Seniority level: Senior, 5+ years

๐Ÿ“ Location: Argentina

๐Ÿ” Industry: Nonprofit fundraising technology

๐Ÿข Company: GoFundMe๐Ÿ‘ฅ 251-500๐Ÿ’ฐ Series A almost 10 years ago๐Ÿซ‚ Last layoff over 2 years agoInternetCrowdfundingPeer to Peer

🗣️ Languages: English

โณ Experience: 5+ years

🪄 Skills: AWS, Python, SQL, AWS EKS, ETL, Java, Kubernetes, Snowflake, C++, Airflow, REST API, Collaboration, Terraform

Requirements:
  • 5+ years as a data engineer designing, developing, and maintaining business data warehouse solutions consisting of structured and unstructured data.
  • Proficiency with building and orchestrating data pipelines using ETL/data preparation tools.
  • Expertise in orchestration tools like Airflow or Prefect.
  • Proficiency in connecting to data sources through web APIs.
  • Proficiency in writing and optimizing SQL queries.
  • Solid knowledge of Python and other programming languages.
  • Experience with Snowflake is required.
  • Good understanding of database architecture and best practices.
Responsibilities:
  • Develop and maintain enterprise data warehouse (Snowflake).
  • Develop and orchestrate ELT data pipelines, sourcing data from databases and web APIs (a minimal sketch of this pattern follows this list).
  • Integrate data from warehouse into third-party tools for actionable insights.
  • Develop and sustain REST API endpoints for data science products.
  • Provide ongoing maintenance and improvements to existing data solutions.
  • Monitor and optimize Snowflake usage for performance and cost-effectiveness.
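
The core loop this role describes is: extract from an API, land the data raw in Snowflake, transform inside the warehouse, and orchestrate with Airflow. Below is a minimal illustrative sketch of that pattern; the endpoint, credentials, and table names are invented placeholders, not GoFundMe's systems.

```python
# Illustrative only: a tiny Airflow DAG that pulls rows from a hypothetical REST
# API and lands them raw in Snowflake, leaving transformation to SQL inside the
# warehouse (ELT). All names and credentials here are placeholders.
import pendulum
import requests
import snowflake.connector
from airflow.decorators import dag, task

@dag(schedule="@daily", start_date=pendulum.datetime(2024, 1, 1), catchup=False)
def donations_elt():
    @task
    def extract() -> list[dict]:
        # Real sources would be paginated and authenticated; this keeps it short.
        resp = requests.get("https://api.example.com/v1/donations", timeout=30)
        resp.raise_for_status()
        return resp.json()["results"]

    @task
    def load(rows: list[dict]) -> None:
        conn = snowflake.connector.connect(
            user="ETL_USER", password="...", account="example_account",
            warehouse="LOAD_WH", database="RAW", schema="DONATIONS",
        )
        try:
            conn.cursor().executemany(
                "INSERT INTO raw_donations (id, amount, created_at) "
                "VALUES (%(id)s, %(amount)s, %(created_at)s)",
                rows,
            )
        finally:
            conn.close()

    load(extract())

donations_elt()
```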

Related Jobs


๐Ÿ“ Worldwide

๐Ÿ” Hospitality

๐Ÿข Company: Lighthouse

Requirements:
  • 4+ years of professional experience using Python, Java, or Scala for data processing (Python preferred)
  • You stay up-to-date with industry trends, emerging technologies, and best practices in data engineering.
  • Improve, manage, and teach standards for code maintainability and performance in code submitted and reviewed
  • Ship large features independently, generate architecture recommendations and have the ability to implement them
  • Great communication: Regularly achieve consensus amongst teams
  • Familiarity with GCP, Kubernetes (GKE preferred), and CI/CD tools (GitLab CI preferred); familiarity with the concept of Lambda Architecture.
  • Experience with Apache Beam or Apache Spark for distributed data processing or event sourcing technologies like Apache Kafka.
  • Familiarity with monitoring tools like Grafana & Prometheus.
Responsibilities:
  • Design and develop scalable, reliable data pipelines using the Google Cloud stack (see the Beam sketch after this list).
  • Optimise data pipelines for performance and scalability.
  • Implement and maintain data governance frameworks, ensuring data accuracy, consistency, and compliance.
  • Monitor and troubleshoot data pipeline issues, implementing proactive measures for reliability and performance.
  • Collaborate with the DevOps team to automate deployments and improve developer experience on the data front.
  • Work with data science and analytics teams to bring their research to production-grade data solutions, using technologies such as Airflow, dbt, or MLflow (but not limited to these).
  • As part of a platform team, communicate effectively with teams across the entire engineering organisation to provide them with reliable foundational data models and data tools.
  • Mentor and provide technical guidance to other engineers working with data.
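
Since this stack centres on Google Cloud and Apache Beam, here is a hedged sketch of a small Beam pipeline; it runs locally with the DirectRunner and would take DataflowRunner options to run on GCP. The file paths and field names are assumptions for illustration, not Lighthouse's code.

```python
# Illustrative Beam pipeline: parse newline-delimited JSON rate events and
# compute a mean rate per hotel. Paths and field names are invented.
import json
import apache_beam as beam

def parse_event(line: str) -> dict:
    event = json.loads(line)
    return {"hotel_id": event["hotel_id"], "rate": float(event["rate"])}

with beam.Pipeline() as pipeline:  # pass DataflowRunner options to run on GCP
    (
        pipeline
        | "Read" >> beam.io.ReadFromText("events.jsonl")
        | "Parse" >> beam.Map(parse_event)
        | "KeyByHotel" >> beam.Map(lambda e: (e["hotel_id"], e["rate"]))
        | "MeanRate" >> beam.combiners.Mean.PerKey()
        | "Format" >> beam.MapTuple(lambda hotel, mean: f"{hotel},{mean:.2f}")
        | "Write" >> beam.io.WriteToText("mean_rates", file_name_suffix=".csv")
    )
```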

Python, SQL, Apache Airflow, ETL, GCP, Kubernetes, Apache Kafka, Data engineering, CI/CD, Mentoring, Terraform, Scala, Data modeling

Posted 3 days ago

๐Ÿ“ Worldwide

๐Ÿงญ Full-Time

NOT STATED
Responsibilities:
  • Own the design and implementation of cross-domain data models that support key business metrics and use cases.
  • Partner with analysts and data engineers to translate business logic into performant, well-documented dbt models (a dbt sketch follows this list).
  • Champion best practices in testing, documentation, CI/CD, and version control, and guide others in applying them.
  • Act as a technical mentor to other analytics engineers, supporting their development and reviewing their code.
  • Collaborate with central data platform and embedded teams to improve data quality, metric consistency, and lineage tracking.
  • Drive alignment on model architecture across domains, ensuring models are reusable, auditable, and trusted.
  • Identify and lead initiatives to reduce technical debt and modernise legacy reporting pipelines.
  • Contribute to the long-term vision of analytics engineering at Pleo and help shape our roadmap for scalability and impact.
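
For a flavour of the dbt work described above, here is a minimal sketch of a dbt Python model (dbt >= 1.3 on a warehouse with Python support, such as Snowflake with Snowpark). The model and column names are invented, and in practice most metric models of this kind would be plain SQL dbt models.

```python
# Hypothetical dbt Python model: one cross-domain metric defined once, with
# lineage tracked through dbt.ref(). All names are invented for the sketch.
def model(dbt, session):
    dbt.config(materialized="table")
    # Reference an upstream staging model; dbt resolves lineage automatically.
    payments = dbt.ref("stg_payments").to_pandas()  # Snowpark -> pandas
    # A single, documented definition of "total spend", reusable across domains.
    return (
        payments.groupby("company_id", as_index=False)
        .agg(total_spend=("amount", "sum"), n_payments=("payment_id", "count"))
    )
```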

SQL, Data Analysis, ETL, Data engineering, CI/CD, Mentoring, Documentation, Data visualization, Data modeling, Data analytics, Data management

Posted 4 days ago

๐Ÿ“ Costa Rica, Brazil, Argentina, Chile, Mexico

๐Ÿ” Insider Risk Management

๐Ÿข Company: Teramind๐Ÿ‘ฅ 51-100Productivity ToolsSecurityCyber SecurityEnterprise SoftwareSoftware

Requirements:
  • 6+ years of experience in data engineering, with a proven track record of successfully delivering data-driven solutions.
  • Strong expertise in designing and building scalable data pipelines using industry-standard tools and frameworks.
  • Experience with big data technologies and distributed systems, such as Hadoop, Spark, or similar frameworks.
  • Proficient programming skills in languages such as Python, Java, or Scala, alongside a solid understanding of database management systems (SQL and NoSQL).
  • Understanding of data requirements for machine learning applications and how to optimize data for model training.
  • Experience with security data processing and compliance standards is preferred, ensuring that data handling meets industry regulations and best practices.
Responsibilities:
  • Design and implement robust data architecture tailored for AI-driven features, ensuring it meets the evolving needs of our platform.
  • Build and maintain efficient data pipelines for processing user activity data, ensuring data flows seamlessly throughout our systems (see the PySpark sketch after this list).
  • Develop comprehensive systems for data storage, retrieval, and processing, facilitating quick and reliable access to information.
  • Ensure high standards of data quality and availability, enabling machine learning models to produce accurate and actionable insights.
  • Enhance the performance and scalability of our data infrastructure to accommodate growing data demands and user activity.
  • Work closely with data scientists and machine learning engineers to understand their data requirements and ensure data solutions are tailored to their needs.
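
As a concrete, purely illustrative example of the pipeline work above, the sketch below aggregates raw user-activity events into daily per-user features with PySpark (Spark is one of the frameworks the listing names). The bucket paths and column names are assumptions.

```python
# Illustrative PySpark batch job: turn raw activity events into simple daily
# per-user features suitable for model training. Paths/columns are invented.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("activity-features").getOrCreate()

events = spark.read.json("s3://example-bucket/activity/*.jsonl")  # hypothetical path
features = (
    events
    .withColumn("day", F.to_date("timestamp"))
    .groupBy("user_id", "day")
    .agg(
        F.count("*").alias("n_events"),
        F.countDistinct("application").alias("n_apps"),
    )
)
features.write.mode("overwrite").parquet("s3://example-bucket/features/daily/")
```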

Python, SQL, Apache Hadoop, ETL, Machine Learning, Azure, Data engineering, NoSQL, Compliance, Scala, Data visualization, Data modeling, Data management

Posted 6 days ago
🔥 Senior Data Engineer

๐Ÿ“ Worldwide

๐Ÿงญ Full-Time

๐Ÿ’ธ 167471.0 USD per year

๐Ÿ” Software Development

๐Ÿข Company: Float.com

Requirements:
  • Expertise in ML, expert systems, and advanced algorithms (e.g., pattern matching, optimization) with applied experience in Scheduling, Recommendations, or Personalization.
  • Proficient in Python or Java and comfortable with SQL and Javascript/Typescript.
  • Experience with large-scale data pipelines and stream processing (e.g., Kafka, Debezium, Flink).
  • Skilled in data integration, cleaning, and validation.
  • Familiar with vector and graph databases (e.g., Neo4j).
Responsibilities:
  • Lead technical viability discussions.
  • Develop and test proof-of-concepts for this project.
  • Analyse existing data.
  • Evaluate our data streaming pipeline.
  • Lead technical discussions related to optimization, pattern detection, and AI, serving as the primary point of contact for these areas within Float.
  • Develop and implement advanced algorithms to enhance the Resource Recommendation Engine and other product features, initially focused on pattern detection and optimization.
  • Design, implement, and maintain our streaming data architecture to support real-time data processing and analytics, ensuring data integrity and reliability (a Kafka consumer sketch follows this list).
  • Establish best practices and standards for optimization, AI, and data engineering development within the organization.
  • Mentor and train team members on optimization, AI, and data engineering concepts and techniques, fostering a culture of continuous learning and innovation.
  • Stay updated with the latest trends and related technologies, and proactively identify opportunities to incorporate them into Float's solutions.
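
To make the streaming stack concrete, here is a hedged sketch of consuming Debezium change events from Kafka with the kafka-python client. The topic name and payload shape are assumptions, not Float's actual architecture.

```python
# Illustrative consumer for Debezium change-data-capture events on Kafka.
# Topic, payload shape, and downstream use are invented for the sketch.
import json
from kafka import KafkaConsumer  # kafka-python

consumer = KafkaConsumer(
    "app.public.allocations",             # hypothetical Debezium topic
    bootstrap_servers="localhost:9092",
    value_deserializer=lambda b: json.loads(b.decode("utf-8")),
    auto_offset_reset="earliest",
)

for message in consumer:
    change = message.value.get("payload", {})
    if change.get("op") in ("c", "u"):    # Debezium create/update operations
        row = change["after"]
        # Hand the fresh row to the recommendation engine's feature store here.
        print(row["id"], row.get("start_date"))
```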

Python, SQL, Kafka, Machine Learning, Algorithms, Data engineering

Posted about 1 month ago
🔥 Senior Data Engineer

๐Ÿ“ Europe, APAC, Americas

๐Ÿงญ Full-Time

๐Ÿ” Software Development

๐Ÿข Company: Docker๐Ÿ‘ฅ 251-500๐Ÿ’ฐ $105,000,000 Series C about 3 years agoDeveloper ToolsDeveloper PlatformInformation TechnologySoftware

Requirements:
  • 4+ years of relevant industry experience
  • Experience with data modeling and building scalable pipelines
  • Proficiency with Snowflake or BigQuery
  • Experience with data governance and security controls
  • Experience creating ETL scripts using Python and SQL
  • Familiarity with a cloud ecosystem: AWS/Azure/Google Cloud
  • Experience with Tableau or Looker
Responsibilities:
  • Manage and develop ETL jobs, warehouse, and event collection tools (a small ETL sketch follows this list).
  • Build and manage the Central Data Model for reporting
  • Integrate emerging methodologies and technologies
  • Build data pipelines for ML and AI projects
  • Contribute to SOC2 compliance across the data platform
  • Document technical architecture
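
"ETL scripts using Python and SQL" is the bread and butter here; below is a deliberately small illustrative script using SQLite so it runs anywhere. The file and table names are invented for the example.

```python
# Minimal ETL sketch in Python + SQL: extract from a CSV, load into a table,
# transform with an aggregate query. SQLite keeps it self-contained.
import csv
import sqlite3

conn = sqlite3.connect("warehouse.db")
conn.execute(
    """CREATE TABLE IF NOT EXISTS events (
           event_id TEXT PRIMARY KEY,
           user_id  TEXT,
           amount   REAL
       )"""
)
with open("events.csv", newline="") as f:  # hypothetical extract file
    rows = [(r["event_id"], r["user_id"], float(r["amount"]))
            for r in csv.DictReader(f)]
conn.executemany(
    "INSERT OR REPLACE INTO events (event_id, user_id, amount) VALUES (?, ?, ?)",
    rows,
)
conn.commit()
# Transform step expressed in SQL, as the listing describes.
totals = conn.execute(
    "SELECT user_id, SUM(amount) FROM events GROUP BY user_id"
).fetchall()
```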

Python, SQL, ETL, Snowflake, Airflow, Data engineering, Data visualization, Data modeling

Posted about 1 month ago

๐Ÿ“ Worldwide

๐Ÿ” Event Technology

NOT STATED
NOT STATED

AWS, Docker, PostgreSQL, Python, SQL, Apache Airflow, Cloud Computing, Data Analysis, ETL, Kubernetes, Algorithms, Apache Kafka, Data engineering, Data Structures, CI/CD, RESTful APIs, Microservices, Data visualization, Data modeling

Posted 3 months ago

๐Ÿ“ Argentina, Colombia, Costa Rica, Mexico

๐Ÿ” Data Analytics

Requirements:
  • Proficient with SQL and data visualization tools (e.g., Tableau, Power BI, Google Data Studio).
  • Programming skills, mainly in SQL.
  • Knowledge and experience with Python and/or R a plus.
  • Experience with Alteryx a plus.
  • Experience working with Google Cloud and AWS a plus.
Responsibilities:
  • Analyze data and consult with subject matter experts to design and develop business rules for data processing.
  • Set up and/or maintain existing dataflows in data wrangling tools like Alteryx or Google Dataprep.
  • Create and/or maintain SQL scripts.
  • Monitor, troubleshoot, and remediate data quality across marketing data systems.
  • Design and execute data quality checks (a sketch follows this list).
  • Maintain ongoing management and stewardship of data governance, processing, and reporting.
  • Govern taxonomy additions, application, and use.
  • Serve as a knowledge expert for operational processes and identify improvement areas.
  • Evaluate opportunities for simplification and/or automation for reporting and processes.
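
Data quality checks like those described are often just SQL invariants run on a schedule. The sketch below shows the idea against a stand-in SQLite database; the table and column names are invented, and a real deployment would point at the cloud warehouse instead.

```python
# Illustrative data-quality runner: each check is a SQL query that should
# return zero violating rows. Table/column names are invented for the sketch.
import sqlite3

conn = sqlite3.connect("marketing.db")  # stand-in for a warehouse connection

CHECKS = {
    "no_null_campaign_ids": "SELECT COUNT(*) FROM spend WHERE campaign_id IS NULL",
    "no_negative_spend":    "SELECT COUNT(*) FROM spend WHERE amount < 0",
    "no_duplicate_rows":    """SELECT COUNT(*) FROM (
                                   SELECT campaign_id, day, COUNT(*) AS n
                                   FROM spend GROUP BY campaign_id, day
                                   HAVING n > 1
                               )""",
}

for name, sql in CHECKS.items():
    (violations,) = conn.execute(sql).fetchone()
    status = "OK" if violations == 0 else f"FAILED ({violations} rows)"
    print(f"{name}: {status}")
```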

AWS, Python, SQL, Data Analysis, GCP, Microsoft Power BI, Tableau, Amazon Web Services, Data engineering

Posted 5 months ago