Senior Data Engineer

Posted about 2 months ago

💎 Seniority level: Senior, 5+ years

📍 Location: Argentina

🔍 Industry: Nonprofit fundraising technology

🏢 Company: GoFundMe | 👥 251-500 | 💰 Series A over 9 years ago | 🫂 Last layoff about 2 years ago | Internet, Crowdfunding, Peer to Peer

🗣️ Languages: English

⏳ Experience: 5+ years

🪄 Skills: AWS, Python, SQL, AWS EKS, ETL, Java, Kubernetes, Snowflake, C++, Airflow, REST API, Collaboration, Terraform

Requirements:
  • 5+ years as a data engineer designing, developing, and maintaining business data warehouses consisting of structured and unstructured data.
  • Proficiency with building and orchestrating data pipelines using ETL/data preparation tools.
  • Expertise in orchestration tools like Airflow or Prefect.
  • Proficiency in connecting data through web APIs.
  • Proficiency in writing and optimizing SQL queries.
  • Solid knowledge of Python and other programming languages.
  • Experience with Snowflake is required.
  • Good understanding of database architecture and best practices.
Responsibilities:
  • Develop and maintain enterprise data warehouse (Snowflake).
  • Develop and orchestrate ELT data pipelines, sourcing data from databases and web APIs (see the sketch after this list).
  • Integrate data from warehouse into third-party tools for actionable insights.
  • Develop and sustain REST API endpoints for data science products.
  • Provide ongoing maintenance and improvements to existing data solutions.
  • Monitor and optimize Snowflake usage for performance and cost-effectiveness.
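
The ELT orchestration described above (Airflow appears in the skills list) typically means a scheduled DAG that pulls from a web API and loads into Snowflake. Below is a minimal sketch, assuming Airflow 2.4+ with the TaskFlow API and the snowflake-connector-python package; the endpoint, table, and credentials are all hypothetical.

```python
# Minimal Airflow DAG sketching an ELT flow: pull records from a web API,
# then load them into Snowflake. All names here are hypothetical.
from datetime import datetime

import requests
import snowflake.connector
from airflow.decorators import dag, task


@dag(schedule="@daily", start_date=datetime(2024, 1, 1), catchup=False)
def donations_elt():
    @task
    def extract() -> list[dict]:
        # Hypothetical REST endpoint; replace with the real source system.
        resp = requests.get("https://api.example.com/v1/donations", timeout=30)
        resp.raise_for_status()
        return resp.json()

    @task
    def load(rows: list[dict]) -> None:
        # Credentials would normally come from an Airflow connection or secret.
        conn = snowflake.connector.connect(
            account="my_account", user="loader", password="***",
            warehouse="LOAD_WH", database="RAW", schema="DONATIONS",
        )
        try:
            with conn.cursor() as cur:
                cur.executemany(
                    "INSERT INTO donations_raw (id, amount) "
                    "VALUES (%(id)s, %(amount)s)",
                    rows,
                )
        finally:
            conn.close()

    load(extract())


donations_elt()
```

In practice the extracted rows would usually land in object storage and be loaded with COPY INTO rather than row inserts, which also keeps Snowflake warehouse usage (and cost) easier to monitor.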
Related Jobs

πŸ“ South Africa, Mauritius, Kenya, Nigeria

πŸ” Technology, Marketplaces

  • BSc degree in Computer Science, Information Systems, Engineering, or related technical field or equivalent work experience.
  • 3+ years related work experience.
  • Minimum of 2 years' experience building and optimizing 'big data' pipelines and architectures, and maintaining data sets.
  • Experienced in Python.
  • Experienced in SQL (PostgreSQL, MS SQL).
  • Experienced in using cloud services: AWS, Azure or GCP.
  • Proficiency in version control, CI/CD and GitHub.
  • Understanding/experience in Glue and PySpark highly desirable.
  • Experience in managing data life cycle.
  • Proficiency in manipulating, processing and architecting large disconnected data sets for analytical requirements.
  • Ability to maintain and optimise processes supporting data transformation, data structures, metadata, dependency and workload management.
  • Good understanding of data management principles - data quality assurance and governance.
  • Strong analytical skills related to working with unstructured datasets.
  • Understanding of message queuing, stream processing, and highly scalable 'big data' datastores.
  • Strong attention to detail.
  • Good communication and interpersonal skills.

  • Suggest efficiencies and implement internal process improvements, automating manual processes.
  • Implement enhancements and new features across data systems.
  • Improve and streamline processes within data systems with support from the Senior Data Engineer.
  • Test CI/CD process for optimal data pipelines.
  • Assemble large, complex data sets that meet functional / non-functional business requirements.
  • Build highly efficient ETL processes.
  • Develop and run unit tests on data pipelines and ensure data consistency (a minimal example follows this list).
  • Develop and maintain automated monitoring solutions.
  • Support reporting and analytics infrastructure.
  • Maintain data quality and data governance, and handle overall upkeep of data infrastructure systems.
  • Maintain data warehouse and data lake metadata, data catalogue, and user documentation for internal business users.
  • Ensure best practice is implemented and maintained on database.
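
Unit-testing pipelines and checking data consistency, as called for above, often reduces to assertions over a transformed dataset. Here is a minimal pytest-style sketch, assuming pandas; the transformation, columns, and rules are hypothetical.

```python
# Minimal pytest-style checks for a pipeline transformation: unique keys,
# no nulls in required columns, expected row filtering. Names are hypothetical.
import pandas as pd


def transform(raw: pd.DataFrame) -> pd.DataFrame:
    # Hypothetical transformation: drop cancelled rows, round amounts.
    out = raw[raw["status"] != "cancelled"].copy()
    out["amount"] = out["amount"].round(2)
    return out


def test_transform_consistency():
    raw = pd.DataFrame({
        "order_id": [1, 2, 3],
        "status": ["ok", "cancelled", "ok"],
        "amount": [10.005, 5.0, 2.5],
    })
    result = transform(raw)
    assert result["order_id"].is_unique        # no duplicate keys
    assert result["amount"].notna().all()      # no nulls in required column
    assert len(result) == 2                    # cancelled rows were removed
```

The same assertions can run inside the pipeline itself as automated monitoring, failing the run before bad data reaches the warehouse.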

AWS, PostgreSQL, Python, SQL, ETL, Git, CI/CD

Posted 9 days ago
🔥 Senior Data Engineer
Posted about 2 months ago

πŸ“ Brazil, Argentina

🧭 Full-Time

πŸ” Manufacturing services

🏒 Company: XometryπŸ‘₯ 501-1000πŸ’° $75,000,000 Series E over 4 years agoArtificial Intelligence (AI)3D PrintingIndustrial EngineeringSoftware

  • Bachelor's degree required, or relevant experience.
  • 3-5+ years of prior experience as a software engineer or data engineer in a fast-paced, technical, problem-solving environment.
  • Cloud Data Warehouse experience - Snowflake.
  • Expert in SQL.
  • Expert in ETL, data modeling, and version control, dbt and GitHub preferred.
  • Data modeling best practices for transactional and analytical processing.
  • Experience with data extraction tools (Fivetran, Airbyte, etc.).
  • Experience with event tracking software (Segment, Tealium, etc).
  • Experience with a programming language such as Python or JavaScript.
  • Experience with Business Intelligence tools, Looker preferred.
  • Ability to communicate effectively and influence others.
  • Ability to work in a fast-paced environment and shift gears quickly.
  • Must be able to work core hours aligned to US Eastern Time (GMT-5).

  • Collaborate closely with other engineers and product managers as a valued member of an autonomous, cross-functional team.
  • Build analytics models that use the data pipeline to provide actionable insights into key business performance metrics (a minimal sketch follows this list).
  • Develop data models for analytics and data science team members that assist them in building and optimizing data.
  • Maintain data pipelines and perform any changes or alterations as requested.
  • Develop deployment and release processes through software integration to support DevOps and CI/CD pipelines.
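
The posting prefers dbt for modeling; the underlying idea of materializing an analytics model in the warehouse can also be sketched directly from Python. A minimal sketch, assuming snowflake-connector-python; the tables and metric are hypothetical.

```python
# Minimal sketch: materialize an analytics model as a Snowflake table,
# in the spirit of a dbt model. Connection details and tables are hypothetical.
import snowflake.connector

MODEL_SQL = """
CREATE OR REPLACE TABLE analytics.daily_orders AS
SELECT order_date,
       COUNT(*)       AS orders,
       SUM(total_usd) AS revenue_usd
FROM raw.orders
GROUP BY order_date
"""


def run_model() -> None:
    conn = snowflake.connector.connect(
        account="my_account", user="transformer", password="***",
        warehouse="TRANSFORM_WH", database="PROD",
    )
    try:
        with conn.cursor() as cur:
            cur.execute(MODEL_SQL)
    finally:
        conn.close()


if __name__ == "__main__":
    run_model()
```

dbt would express the same model as a version-controlled SELECT with configuration, which is why the posting pairs it with GitHub.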

Python, SQL, Business Intelligence, ETL, JavaScript, Snowflake, Collaboration, CI/CD, DevOps

Posted about 2 months ago
🔥 (830) Senior Data Engineer
Posted about 2 months ago

πŸ“ LATAM

πŸ” Staff augmentation

🏒 Company: NearsureπŸ‘₯ 501-1000Staffing AgencyOutsourcingSoftware

  • Bachelor's Degree in Computer Science, Engineering, or related field.
  • 5+ Years of experience working in Data Engineering.
  • 3+ Years of experience working with SQL.
  • 3+ Years of experience working with Python.
  • 2+ Years of experience working with Cloud.
  • 2+ Years of experience working with Microservices.
  • 2+ Years of experience working with Databases.
  • 2+ Years of experience working with CI/CD tools.
  • Advanced English Level is required.

  • Write code to test and validate business logic in SQL and code repositories.
  • Test back-end API interfaces for the financial platform (a minimal example follows this list).
  • Build developer productivity tools.
  • Provide risk assessments on identified defects.
  • Define quality engineering scope and maintain automated scripts.
  • Participate in Scrum meetings for status updates.
  • Identify and clarify information needs.
  • Organize and resolve current backlog issues.
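
Testing a back-end API interface, as mentioned above, usually means asserting on status codes, response schema, and business rules. A minimal pytest sketch, assuming the requests library; the endpoint and fields are hypothetical.

```python
# Minimal pytest sketch for validating a back-end API interface: status code,
# response schema, and a business rule. Endpoint and fields are hypothetical.
import requests

BASE_URL = "https://api.example.com"  # hypothetical financial platform


def test_get_account_balance():
    resp = requests.get(f"{BASE_URL}/v1/accounts/123/balance", timeout=10)
    assert resp.status_code == 200

    body = resp.json()
    assert set(body) >= {"account_id", "balance", "currency"}  # schema check
    assert body["balance"] >= 0   # business rule: balances are non-negative
```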

Python, SQL, SCRUM, Data engineering, CI/CD, Microservices

Posted about 2 months ago

πŸ“ Argentina, Colombia, Costa Rica, Mexico

πŸ” Data Analytics

  • Proficient with SQL and data visualization tools (e.g., Tableau, Power BI, Google Data Studio).
  • Programming skills, mainly SQL.
  • Knowledge and experience with Python and/or R a plus.
  • Experience with Alteryx a plus.
  • Experience working with Google Cloud and AWS a plus.

  • Analyze data and consult with subject matter experts to design and develop business rules for data processing.
  • Set up and/or maintain existing dataflows in data wrangling tools like Alteryx or Google Dataprep.
  • Create and/or maintain SQL scripts.
  • Monitor, troubleshoot, and remediate data quality across marketing data systems.
  • Design and execute data quality checks (a minimal sketch follows this list).
  • Maintain ongoing management and stewardship of data governance, processing, and reporting.
  • Govern taxonomy additions, application, and use.
  • Serve as a knowledge expert for operational processes and identify improvement areas.
  • Evaluate opportunities for simplification and/or automation for reporting and processes.
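
Data quality checks like those described above are often expressed as SQL rules that count violating rows. A minimal, self-contained sketch using Python's built-in sqlite3 as a stand-in for the warehouse; the table and rules are hypothetical.

```python
# Minimal sketch: run SQL-based data quality rules and report violations.
# sqlite3 stands in for the real warehouse; table and rules are hypothetical.
import sqlite3

# Each rule is a query returning the number of violating rows.
RULES = {
    "no_null_campaign": "SELECT COUNT(*) FROM spend WHERE campaign IS NULL",
    "no_negative_cost": "SELECT COUNT(*) FROM spend WHERE cost < 0",
}

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE spend (campaign TEXT, cost REAL)")
conn.executemany("INSERT INTO spend VALUES (?, ?)",
                 [("brand", 100.0), (None, 50.0), ("search", -5.0)])

for name, sql in RULES.items():
    violations = conn.execute(sql).fetchone()[0]
    status = "PASS" if violations == 0 else f"FAIL ({violations} rows)"
    print(f"{name}: {status}")
```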

AWS, Python, SQL, Data Analysis, GCP, Microsoft Power BI, Tableau, Amazon Web Services, Data engineering

Posted 2 months ago

πŸ“ Mexico, Gibraltar, Colombia, USA, Brazil, Argentina

🧭 Full-Time

πŸ” FinTech

🏒 Company: Bitso

  • Proven English fluency.
  • 3+ years professional working experience with analytics, ETLs, and data systems.
  • 3+ years with SQL databases, data lakes, big data, and cloud infrastructure.
  • 3+ years experience with Spark.
  • BS or Master's in Computer Science or similar.
  • Strong proficiency in SQL, Python, and AWS.
  • Strong data modeling skills.

  • Build processes required for optimal extraction, transformation, and loading of data from various sources using SQL, Python, and Spark (a minimal sketch follows this list).
  • Identify, design, and implement internal process improvements while optimizing data delivery and redesigning infrastructure for scalability.
  • Ensure data integrity, quality, and security.
  • Work with stakeholders to assist with data-related technical issues and support their data needs.
  • Manage data separation and security across multiple data sources.
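
The Spark-based extract/transform/load work described above might look like the following minimal PySpark sketch; the input path, filter, and output layout are hypothetical.

```python
# Minimal PySpark ETL sketch: read raw events, derive a column, write Parquet.
# Paths and column names are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("trades_etl").getOrCreate()

# Extract: raw JSON events from object storage (hypothetical path).
raw = spark.read.json("s3a://raw-bucket/trades/2024-01-01/")

# Transform: keep completed trades and add a notional-value column.
trades = (
    raw.filter(F.col("status") == "completed")
       .withColumn("notional_usd", F.col("price") * F.col("quantity"))
)

# Load: partitioned Parquet for downstream analytics (hypothetical path).
trades.write.mode("overwrite").partitionBy("trade_date").parquet(
    "s3a://curated-bucket/trades/"
)
```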

AWS, Python, SQL, Business Intelligence, Machine Learning, Data engineering, Data Structures, Spark, Communication Skills, Data modeling

Posted 2 months ago

πŸ“ Central EU or Americas

🧭 Full-Time

πŸ” Real estate investment

🏒 Company: RoofstockπŸ‘₯ 501-1000πŸ’° $240,000,000 Series E almost 3 years agoπŸ«‚ Last layoff almost 2 years agoRental PropertyPropTechMarketplaceReal EstateFinTech

  • BS or MS in a technical field: computer science, engineering or similar.
  • 8+ years technical experience working with data.
  • 5+ years of strong experience building scalable data services and applications using SQL, Python, and Java/Kotlin.
  • Deep understanding of microservices architecture and RESTful API development.
  • Experience with AWS services including messaging and familiarity with real-time data processing frameworks.
  • Significant experience building and deploying data-related infrastructure and robust data pipelines.
  • Strong understanding of data architecture and related challenges.
  • Experience with complex problems and distributed systems focusing on scalability and performance.
  • Strong communication and interpersonal skills.
  • Independent worker able to collaborate with cross-functional teams.

  • Improve and maintain the data services platform (a minimal endpoint sketch follows this list).
  • Deliver high-quality data services promptly, ensuring data governance and integrity while meeting objectives and maintaining SLAs.
  • Develop effective architectures and produce key code components contributing to technical solutions.
  • Integrate a diverse network of third-party tools into a cohesive, scalable platform.
  • Continuously enhance system performance and reliability by diagnosing and resolving operational issues.
  • Ensure rigorous testing of the team's work through automated methods.
  • Support data infrastructure and collaborate with the data team on scalable data pipelines.
  • Work within an Agile/Scrum framework with cross-functional teams to deliver value.
  • Influence the enterprise data platform architecture and standards.
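
A data services platform exposing warehouse-backed data over REST, as described above, might look like the following. A minimal sketch, assuming FastAPI; the route and in-memory metric store are hypothetical stand-ins for a real warehouse query.

```python
# Minimal FastAPI sketch of a data-service endpoint. The in-memory dict
# stands in for a warehouse lookup; route and names are hypothetical.
from fastapi import FastAPI, HTTPException

app = FastAPI()

# Stand-in for a warehouse query (e.g., against Snowflake).
METRICS = {"occupancy_rate": 0.94, "avg_rent_usd": 1850.0}


@app.get("/metrics/{name}")
def get_metric(name: str) -> dict:
    if name not in METRICS:
        raise HTTPException(status_code=404, detail="unknown metric")
    return {"name": name, "value": METRICS[name]}

# Run with: uvicorn service:app --reload  (assuming this file is service.py)
```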

AWS, Docker, Python, SQL, Agile, ETL, SCRUM, Snowflake, Airflow, Data engineering, gRPC, RESTful APIs, Microservices

Posted 5 months ago