Senior Data Engineer

Posted about 1 month ago

πŸ’Ž Seniority level: Senior, 3-5+ years

πŸ“ Location: Brazil, Argentina, US Eastern Time, GMT-5

πŸ” Industry: Manufacturing services

🏒 Company: Xometry · πŸ‘₯ 501-1000 · πŸ’° $75,000,000 Series E over 4 years ago · Artificial Intelligence (AI) · 3D Printing · Industrial Engineering · Software

πŸ—£οΈ Languages: English

⏳ Experience: 3-5+ years

πŸͺ„ Skills: Python · SQL · Business Intelligence · ETL · JavaScript · Snowflake · Collaboration · CI/CD · DevOps

Requirements:
  • Bachelor's degree or equivalent relevant experience required.
  • 3-5+ years of prior experience as a software engineer or data engineer in a fast-paced, technical, problem-solving environment.
  • Cloud Data Warehouse experience - Snowflake.
  • Expert in SQL.
  • Expert in ETL, data modeling, and version control; dbt and GitHub preferred.
  • Knowledge of data modeling best practices for transactional and analytical processing.
  • Experience with data extraction tools (Fivetran, Airbyte, etc.).
  • Experience with event tracking software (Segment, Tealium, etc.).
  • Experience with a programming language such as Python or JavaScript.
  • Experience with Business Intelligence tools, Looker preferred.
  • Ability to communicate effectively and influence others.
  • Ability to work in a fast-paced environment and shift gears quickly.
  • Must be able to work core hours aligned to US Eastern Time (GMT-5).
Responsibilities:
  • Collaborate closely with other engineers and product managers as a valued member of an autonomous, cross-functional team.
  • Build analytics models that use the data pipeline to provide actionable insights into key business performance metrics (see the sketch after this list).
  • Develop data models that help analytics and data science team members build and optimize datasets.
  • Maintain data pipelines and implement changes as requested.
  • Deploy and release functionality through software integration to support DevOps and CI/CD pipelines.
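
As a purely illustrative sketch of the "build analytics models" bullet above (not Xometry's actual stack), here is a minimal Python snippet that materializes a daily-metrics model in Snowflake using the snowflake-connector-python package. Every object name (raw.orders, analytics.daily_order_metrics, the TRANSFORMING warehouse) is a hypothetical placeholder; in a dbt shop, as this listing prefers, the model would normally live in a dbt project rather than in ad-hoc Python.

    # Minimal sketch: materialize a daily-metrics analytics model in Snowflake.
    # All object names are hypothetical; credentials come from the environment.
    import os
    import snowflake.connector

    MODEL_SQL = """
    CREATE OR REPLACE TABLE analytics.daily_order_metrics AS
    SELECT
        DATE_TRUNC('day', created_at) AS order_date,
        COUNT(*)                      AS orders,
        SUM(total_amount)             AS revenue
    FROM raw.orders
    GROUP BY 1
    """

    conn = snowflake.connector.connect(
        account=os.environ["SNOWFLAKE_ACCOUNT"],
        user=os.environ["SNOWFLAKE_USER"],
        password=os.environ["SNOWFLAKE_PASSWORD"],
        warehouse="TRANSFORMING",  # hypothetical warehouse name
    )
    try:
        conn.cursor().execute(MODEL_SQL)
    finally:
        conn.close()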

Related Jobs

πŸ“ Colombia, Spain, Ecuador, Venezuela, Argentina

πŸ” HR Tech

🏒 Company: Jobgether · πŸ‘₯ 11-50 · πŸ’° $1,493,585 Seed almost 2 years ago · Internet

Requirements:
  • Bachelor's or Master's degree in Computer Science, Engineering, or a related technical field.
  • Minimum of 4 years of experience in data engineering.
  • Proficiency in Python with at least 4 years of experience.
  • Hands-on experience with big data technologies like Hadoop, Spark, or Kafka.
  • Proficiency in relational databases (e.g., MySQL, PostgreSQL) and NoSQL databases like MongoDB.
  • Experience with AWS cloud platforms.
  • Strong understanding of data modeling, schema design, and data warehousing concepts.
  • Excellent analytical and troubleshooting skills.
  • Fluency in English and Spanish.

Responsibilities:
  • Design, build, and maintain scalable data pipelines and ETL processes for large volumes of data (see the sketch after this list).
  • Develop and optimize data scraping solutions for efficient job data extraction.
  • Collaborate with data scientists on AI-driven matching algorithms.
  • Ensure data integrity and reliability through validation mechanisms.
  • Analyze and optimize system performance, addressing issues as they arise.
  • Work with teams to efficiently deploy machine learning models.
  • Stay updated on emerging technologies and best practices.
  • Collaborate with various teams to understand and implement data requirements.
  • Maintain documentation for systems and processes, ensuring compliance.
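
A hedged illustration of the first bullet above ("scalable data pipelines and ETL processes"), sketched as a minimal PySpark batch job: the S3 paths, the posting_id dedup key, and the title column are invented for the example, not Jobgether's schema.

    # Minimal PySpark batch-ETL sketch: read raw job postings, deduplicate,
    # stamp ingestion time, and write a curated Parquet dataset.
    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.appName("job-postings-etl").getOrCreate()

    raw = spark.read.json("s3://example-bucket/raw/job_postings/")  # hypothetical path
    curated = (
        raw.dropDuplicates(["posting_id"])            # hypothetical primary key
           .filter(F.col("title").isNotNull())
           .withColumn("ingested_at", F.current_timestamp())
    )
    curated.write.mode("overwrite").parquet("s3://example-bucket/curated/job_postings/")
    spark.stop()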

AWS · Docker · PostgreSQL · Python · ETL · Hadoop · Kafka · Kubernetes · Machine Learning · MongoDB · MySQL · Spark · Data modeling

Posted 8 days ago

πŸ“ Brazil

🧭 Full-Time

πŸ” Government affairs technology

Requirements:
  • Bachelor's or Master's degree in Computer Science, Engineering, or related field.
  • 5+ years in data engineering with a proven track record in developing data products.
  • Expertise in AWS services (EC2, EMR, RDS, Redshift) and building data architectures.
  • Proficient in big data tools (Hadoop, Spark, Kafka) and machine learning frameworks (TensorFlow, PyTorch).
  • 3+ years experience with Python and deep knowledge of SQL and NoSQL databases.

Responsibilities:
  • Architect and implement highly scalable, advanced Retrieval-Augmented Generation (RAG) data pipelines (see the chunking sketch after this list).
  • Design robust data pipelines for real-time processing and analysis of extensive datasets.
  • Lead cloud-based deployments in AWS with a focus on performance, security, and cost-efficiency.
  • Innovate on the data architecture to adapt to the dynamic needs of Quorum Copilot.
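
RAG ingestion designs vary widely, so this is only a hedged, self-contained illustration of one stage a "RAG data pipeline" typically includes: splitting documents into overlapping word windows before embedding and indexing. The Chunk shape and the window sizes are arbitrary assumptions, not Quorum's design.

    # Illustrative RAG-ingestion stage: split a document into overlapping
    # word windows so each chunk fits an embedding model's context budget.
    from dataclasses import dataclass

    @dataclass
    class Chunk:
        doc_id: str
        text: str

    def chunk_document(doc_id: str, text: str,
                       size: int = 200, overlap: int = 40) -> list[Chunk]:
        words = text.split()
        step = size - overlap
        return [
            Chunk(doc_id, " ".join(words[i:i + size]))
            for i in range(0, max(len(words) - overlap, 1), step)
        ]

    # Example: a 500-word document yields windows starting at words 0, 160, and 320.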

AWS · Python · SQL · Apache Airflow · ETL · Hadoop · Kafka · Machine Learning · PyTorch · NoSQL · Spark · TensorFlow

Posted 9 days ago
πŸ”₯ Senior Data Engineer
Posted about 1 month ago

πŸ“ Argentina

🧭 Full-Time

πŸ” Nonprofit fundraising technology

🏒 Company: GoFundMe · πŸ‘₯ 251-500 · πŸ’° Series A over 9 years ago · πŸ«‚ Last layoff about 2 years ago · Internet · Crowdfunding · Peer to Peer

Requirements:
  • 5+ years as a data engineer designing, developing, and maintaining business data warehouse solutions spanning structured and unstructured data.
  • Proficiency with building and orchestrating data pipelines using ETL/data preparation tools.
  • Expertise in orchestration tools like Airflow or Prefect.
  • Proficiency in connecting data through web APIs.
  • Proficiency in writing and optimizing SQL queries.
  • Solid knowledge of Python and other programming languages.
  • Experience with Snowflake is required.
  • Good understanding of database architecture and best practices.

Responsibilities:
  • Develop and maintain the enterprise data warehouse (Snowflake).
  • Develop and orchestrate ELT data pipelines, sourcing data from databases and web APIs (see the DAG skeleton after this list).
  • Integrate data from the warehouse into third-party tools for actionable insights.
  • Develop and sustain REST API endpoints for data science products.
  • Provide ongoing maintenance and improvements to existing data solutions.
  • Monitor and optimize Snowflake usage for performance and cost-effectiveness.
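
As a hedged sketch of the orchestration work described above, here is a minimal Airflow 2.x DAG skeleton for a daily ELT run; the dag_id and both task callables are placeholder stubs, not GoFundMe's pipeline.

    # Minimal Airflow 2.x DAG skeleton: extract from a web API, then load
    # into Snowflake, once per day. Both callables are placeholder stubs.
    from datetime import datetime
    from airflow import DAG
    from airflow.operators.python import PythonOperator

    def extract_from_api(**_):
        ...  # pull a batch from a source web API (stub)

    def load_to_snowflake(**_):
        ...  # load the extracted batch into Snowflake (stub)

    with DAG(
        dag_id="daily_elt",              # hypothetical name
        start_date=datetime(2024, 1, 1),
        schedule="@daily",               # "schedule_interval" on Airflow < 2.4
        catchup=False,
    ) as dag:
        extract = PythonOperator(task_id="extract", python_callable=extract_from_api)
        load = PythonOperator(task_id="load", python_callable=load_to_snowflake)
        extract >> load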

AWS · Python · SQL · AWS EKS · ETL · Java · Kubernetes · Snowflake · C++ · Airflow · REST API · Collaboration · Terraform

Posted about 1 month ago
πŸ”₯ (830) Senior Data Engineer
Posted about 1 month ago

πŸ“ LATAM

πŸ” Staff augmentation

🏒 Company: Nearsure · πŸ‘₯ 501-1000 · Staffing Agency · Outsourcing · Software

Requirements:
  • Bachelor's degree in Computer Science, Engineering, or a related field.
  • 5+ years of experience working in data engineering.
  • 3+ years of experience working with SQL.
  • 3+ years of experience working with Python.
  • 2+ years of experience working with cloud platforms.
  • 2+ years of experience working with microservices.
  • 2+ years of experience working with databases.
  • 2+ years of experience working with CI/CD tools.
  • An advanced English level is required.

Responsibilities:
  • Write code to test and validate business logic in SQL and code repositories (see the test sketch after this list).
  • Test back-end API interfaces for the financial platform.
  • Build developer productivity tools.
  • Provide risk assessments on identified defects.
  • Define quality engineering scope and maintain automated scripts.
  • Participate in Scrum meetings for status updates.
  • Identify and clarify information needs.
  • Organize and resolve current backlog issues.
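
The "test and validate business logic in SQL" bullet above is broad; one self-contained way to read it is a pytest check that expresses a rule as a SQL query and asserts zero violating rows. SQLite stands in for the platform's real databases here, and the table, rule, and data are hypothetical.

    # Self-contained pytest sketch: express a business rule as SQL and assert
    # that no rows violate it. SQLite stands in for the real database.
    import sqlite3

    def test_no_negative_balances():
        conn = sqlite3.connect(":memory:")
        conn.execute("CREATE TABLE accounts (id INTEGER, balance REAL)")
        conn.executemany("INSERT INTO accounts VALUES (?, ?)",
                         [(1, 10.0), (2, 0.0)])
        violations = conn.execute(
            "SELECT COUNT(*) FROM accounts WHERE balance < 0"
        ).fetchone()[0]
        assert violations == 0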

Python · SQL · SCRUM · Data engineering · CI/CD · Microservices

Posted about 1 month ago

πŸ“ Argentina, Colombia, Costa Rica, Mexico

πŸ” Data Analytics

Requirements:
  • Proficient with SQL and data visualization tools (e.g., Tableau, Power BI, Google Data Studio).
  • Programming skills, primarily SQL.
  • Knowledge and experience with Python and/or R a plus.
  • Experience with Alteryx a plus.
  • Experience working with Google Cloud and AWS a plus.

Responsibilities:
  • Analyze data and consult with subject matter experts to design and develop business rules for data processing.
  • Setup and/or maintain existing dataflows in data wrangling tools like Alteryx or Google Dataprep.
  • Create and/or maintain SQL scripts.
  • Monitor, troubleshoot, and remediate data quality across marketing data systems.
  • Design and execute data quality checks.
  • Maintain ongoing management and stewardship of data governance, processing, and reporting.
  • Govern taxonomy additions, application, and use.
  • Serve as a knowledge expert for operational processes and identify improvement areas.
  • Evaluate opportunities for simplification and/or automation for reporting and processes.

AWS · Python · SQL · Data Analysis · GCP · Microsoft Power BI · Tableau · Amazon Web Services · Data engineering

Posted about 2 months ago
πŸ”₯ Senior Data Engineer
Posted about 2 months ago

πŸ“ Mexico, Gibraltar, Colombia, USA, Brazil, Argentina

🧭 Full-Time

πŸ” FinTech

🏒 Company: Bitso

Requirements:
  • Proven English fluency.
  • 3+ years professional working experience with analytics, ETLs, and data systems.
  • 3+ years with SQL databases, data lake, big data, and cloud infrastructure.
  • 3+ years experience with Spark.
  • BS or Master's in Computer Science or similar.
  • Strong proficiency in SQL, Python, and AWS.
  • Strong data modeling skills.

Responsibilities:
  • Build the processes required for optimal extraction, transformation, and loading of data from various sources using SQL, Python, and Spark.
  • Identify, design, and implement internal process improvements while optimizing data delivery and redesigning infrastructure for scalability.
  • Ensure data integrity, quality, and security.
  • Work with stakeholders to assist with data-related technical issues and support their data needs.
  • Manage data separation and security across multiple data sources.

AWS · Python · SQL · Business Intelligence · Machine Learning · Data engineering · Data Structures · Spark · Communication Skills · Data modeling

Posted about 2 months ago

πŸ“ LATAM

πŸ” Financial Services

🏒 Company: Truelogic Software

Requirements:
  • 5-7 years of experience in data engineering.
  • 3-5 years of experience in software development.
  • Deep experience with Snowflake and Stored Procedures.
  • Extensive experience working with Python and modern web frameworks.
  • Experience with queueing systems such as Celery, SQS, Pub/Sub.
  • Strong expertise in Python 3, Object-Oriented Code, and Design Patterns.
  • Experience with REST APIs, Git, and writing unit tests.
  • Knowledge of databases (SQL, NoSQL), AWS, serverless environments, and Infrastructure as Code (CloudFormation & CDK).
  • Familiarity with DevOps practices (CI/CD, Automated Pipelines) and Agile methodologies.

Responsibilities:
  • You will work in a growth-oriented environment at a financial services firm focused on the U.S. mortgage market.
  • Provide strategic guidance and solution patterns for sub-products.
  • Collaborate with business analysts and stakeholders to refine requirements.
  • Work closely with developers to review and validate key functionality, ensuring successful integration with existing systems.

AWS · Python · Software Development · SQL · Agile · Design Patterns · Django · Flask · Git · SCRUM · Snowflake · TypeScript · Jira · Amazon Web Services · Data engineering · REST API · Serverless · NoSQL · CI/CD · Microservices

Posted 3 months ago

πŸ“ LATAM

πŸ” Mortgage Lending

🏒 Company: Truelogic Software

Requirements:
  • 5+ years of hands-on experience in backend engineering or data integration.
  • At least 2 years of experience working with an MDM platform.
  • Experience integrating with customer preference platforms like OneTrust or PossibleNow.
  • Proficiency in Python or another relevant programming language.
  • Expertise in AWS serverless technologies including Glue, Lambda, Kinesis, SQS.
  • Strong SQL skills and experience with Snowflake or similar data warehouses.
  • Experience with data modeling, cleansing, and quality management.
  • Understanding of REST APIs and web services.
  • Attention to detail and commitment to data accuracy.

Responsibilities:
  • Become a subject matter expert on Reltio.
  • Design and maintain integrations between Reltio and other systems using AWS services (see the handler sketch after this list).
  • Implement data quality rules and cleansing routines.
  • Collaborate with data stewards to enforce governance policies.
  • Monitor and optimize performance of integrations.
  • Work with stakeholders to gather requirements and troubleshoot issues.
  • Stay updated on trends in MDM and data integration.
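
As a hedged sketch of an AWS serverless integration step like those described above, here is a minimal Lambda handler that consumes SQS messages and applies a toy cleansing rule; the message shape and the rule are invented for illustration, not Reltio's behavior.

    # Minimal AWS Lambda handler sketch: consume SQS records and apply a toy
    # cleansing rule (normalize email casing) before forwarding downstream.
    import json

    def handler(event, context):
        cleansed = []
        for record in event["Records"]:  # standard SQS event shape
            body = json.loads(record["body"])
            body["email"] = body.get("email", "").strip().lower()  # toy rule
            cleansed.append(body)
        # a real integration would now push `cleansed` to the MDM platform
        return {"processed": len(cleansed)}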

AWS · Node.js · Python · SQL · JavaScript · Snowflake · Algorithms · REST API · Serverless · Collaboration · Data modeling

Posted 3 months ago

πŸ“ Brazil

🧭 Contract

πŸ” Digital Learning

Requirements:
  • 4+ years of professional experience in data engineering.
  • Proficiency in modern data engineering processes.
  • Solid understanding of cloud infrastructure.
  • Hands-on experience with relevant data-related AWS services.
  • Advanced understanding of the relational data model.
  • Experience interacting with RESTful APIs and webhooks.
  • Understanding of DevOps disciplines and Git version control.
  • A commitment to writing clean, efficient, and maintainable code.
  • Familiarity with monitoring tools like CloudWatch or New Relic.
  • Familiarity with incident management tools like PagerDuty is desirable.
  • An understanding of software engineering paradigms and the development lifecycle is desirable.
  • Strong SQL skills.
  • Expertise utilizing Python for data wrangling.
  • Experience using Airflow for data pipeline orchestration.
  • Experience creating big data workloads with Spark.
  • Experience working with large-scale data warehouses like Redshift.
  • Experience composing AWS infrastructure with Terraform is desirable.

Responsibilities:
  • Partner with Engineering and Product teams on high-impact initiatives.
  • Design and implement robust data pipelines using AWS services.
  • Enhance event collection, queueing, and processing mechanisms.
  • Build and refine meaningful alerts with automated corrective actions (see the alarm sketch after this list).
  • Troubleshoot and resolve issues, ensuring system reliability and stability.
  • Architect scalable data products with long-term vision and efficiency in mind.
  • Identify and resolve complex technical challenges involving system integration, infrastructure, and software bugs.
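
"Meaningful alerts" can take many forms; given the CloudWatch mention in the requirements, one hedged illustration is a boto3 call that alarms when a custom pipeline-freshness metric breaches a threshold. The namespace, metric name, and thresholds are all made up.

    # Illustrative boto3 sketch: alarm when a custom pipeline-freshness metric
    # stays above its threshold for three evaluation periods.
    import boto3

    cloudwatch = boto3.client("cloudwatch")
    cloudwatch.put_metric_alarm(
        AlarmName="pipeline-freshness-breach",
        Namespace="DataPipelines",          # hypothetical custom namespace
        MetricName="MinutesSinceLastLoad",  # hypothetical custom metric
        Statistic="Maximum",
        Period=300,                         # seconds per evaluation window
        EvaluationPeriods=3,
        Threshold=60,
        ComparisonOperator="GreaterThanThreshold",
    )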

AWS · Python · SQL · ETL · Git · Java · Kafka · Strategy · Data engineering · Collaboration · RESTful APIs · Terraform

Posted 4 months ago

πŸ“ Central EU or Americas

🧭 Full-Time

πŸ” Real estate investment

🏒 Company: Roofstock · πŸ‘₯ 501-1000 · πŸ’° $240,000,000 Series E almost 3 years ago · πŸ«‚ Last layoff almost 2 years ago · Rental Property · PropTech · Marketplace · Real Estate · FinTech

Requirements:
  • BS or MS in a technical field: computer science, engineering, or similar.
  • 8+ years of technical experience working with data.
  • 5+ years of strong experience building scalable data services and applications using SQL, Python, and Java/Kotlin.
  • Deep understanding of microservices architecture and RESTful API development.
  • Experience with AWS services including messaging and familiarity with real-time data processing frameworks.
  • Significant experience building and deploying data-related infrastructure and robust data pipelines.
  • Strong understanding of data architecture and related challenges.
  • Experience with complex problems and distributed systems focusing on scalability and performance.
  • Strong communication and interpersonal skills.
  • Independent worker able to collaborate with cross-functional teams.

Responsibilities:
  • Improve and maintain the data services platform (see the endpoint sketch after this list).
  • Deliver high-quality data services promptly, ensuring data governance and integrity while meeting objectives and maintaining SLAs.
  • Develop effective architectures and produce key code components contributing to technical solutions.
  • Integrate a diverse network of third-party tools into a cohesive, scalable platform.
  • Continuously enhance system performance and reliability by diagnosing and resolving operational issues.
  • Ensure rigorous testing of the team's work through automated methods.
  • Support data infrastructure and collaborate with the data team on scalable data pipelines.
  • Work within an Agile/Scrum framework with cross-functional teams to deliver value.
  • Influence the enterprise data platform architecture and standards.
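
As a minimal, assumption-laden sketch of the RESTful data-services work described above, here is a FastAPI endpoint that could serve a warehouse-backed metric; the route and response fields are invented, and a real service would query the warehouse instead of returning a stub (run locally with, e.g., uvicorn).

    # Minimal FastAPI sketch of a data-services REST endpoint. The route and
    # payload are invented; the metric value is stubbed rather than queried.
    from fastapi import FastAPI

    app = FastAPI()

    @app.get("/properties/{property_id}/metrics")
    def property_metrics(property_id: str) -> dict:
        return {"property_id": property_id, "days_on_market": 12}  # stubbed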

AWS · Docker · Python · SQL · Agile · ETL · SCRUM · Snowflake · Airflow · Data engineering · gRPC · RESTful APIs · Microservices

Posted 5 months ago