
Senior Data Engineer

Posted 5 days ago

💎 Seniority level: Senior, 6+ years

📍 Location: Costa Rica, Brazil, Argentina, Chile, Mexico

🔍 Industry: Insider Risk Management

🏢 Company: Teramind 👥 51-100 · Productivity Tools · Security · Cyber Security · Enterprise Software · Software

🗣️ Languages: English

⏳ Experience: 6+ years

🪄 Skills: Python, SQL, Apache Hadoop, ETL, Machine Learning, Azure, Data engineering, NoSQL, Compliance, Scala, Data visualization, Data modeling, Data management

Requirements:
  • 6+ years of experience in data engineering, with a proven track record of successfully delivering data-driven solutions.
  • Strong expertise in designing and building scalable data pipelines using industry-standard tools and frameworks.
  • Experience with big data technologies and distributed systems, such as Hadoop, Spark, or similar frameworks.
  • Proficient programming skills in languages such as Python, Java, or Scala, alongside a solid understanding of database management systems (SQL and NoSQL).
  • Understanding of data requirements for machine learning applications and how to optimize data for model training.
  • Experience with security data processing and compliance standards is preferred, ensuring that data handling meets industry regulations and best practices.
Responsibilities:
  • Design and implement robust data architecture tailored for AI-driven features, ensuring it meets the evolving needs of our platform.
  • Build and maintain efficient data pipelines for processing user activity data, ensuring data flows seamlessly throughout our systems (a minimal sketch follows this list).
  • Develop comprehensive systems for data storage, retrieval, and processing, facilitating quick and reliable access to information.
  • Ensure high standards of data quality and availability, enabling machine learning models to produce accurate and actionable insights.
  • Enhance the performance and scalability of our data infrastructure to accommodate growing data demands and user activity.
  • Work closely with data scientists and machine learning engineers to understand their data requirements and ensure data solutions are tailored to their needs.
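
Illustrating the user-activity pipeline work above: a minimal PySpark sketch (Spark is one of the frameworks named in the requirements). The paths, event schema, and aggregation rules are invented for illustration and are not Teramind's actual design.

```python
# Minimal sketch, assuming a hypothetical JSON feed of user activity events.
# Paths, field names, and the aggregation are illustrative only.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("user-activity-daily").getOrCreate()

# Read raw activity events (assumed layout: one JSON object per line).
events = spark.read.json("s3://example-bucket/activity/date=2024-01-01/")

# Basic quality gate: drop records missing fields the ML models depend on.
clean = events.dropna(subset=["user_id", "event_type", "event_ts"])

# Aggregate per-user daily activity counts as model-training features.
daily = (
    clean.withColumn("event_date", F.to_date("event_ts"))
         .groupBy("user_id", "event_date", "event_type")
         .count()
)

# Persist in a columnar format for fast, reliable downstream reads.
daily.write.mode("overwrite").partitionBy("event_date").parquet(
    "s3://example-bucket/features/daily_activity/"
)
```
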
Apply

Related Jobs

Apply

πŸ“ Worldwide

πŸ” Hospitality

🏒 Company: Lighthouse

  • 4+ years of professional experience using Python, Java, or Scala for data processing (Python preferred)
  • Stay up-to-date with industry trends, emerging technologies, and best practices in data engineering
  • Improve, manage, and teach standards for code maintainability and performance in code you submit and review
  • Ship large features independently, generate architecture recommendations and have the ability to implement them
  • Great communication: Regularly achieve consensus amongst teams
  • Familiarity with GCP, Kubernetes (GKE preferred), CI/CD tools (GitLab CI preferred), and the concept of Lambda Architecture.
  • Experience with Apache Beam or Apache Spark for distributed data processing or event sourcing technologies like Apache Kafka.
  • Familiarity with monitoring tools like Grafana & Prometheus.
  • Design and develop scalable, reliable data pipelines using the Google Cloud stack (see the sketch after this list).
  • Optimise data pipelines for performance and scalability.
  • Implement and maintain data governance frameworks, ensuring data accuracy, consistency, and compliance.
  • Monitor and troubleshoot data pipeline issues, implementing proactive measures for reliability and performance.
  • Collaborate with the DevOps team to automate deployments and improve developer experience on the data front.
  • Work with data science and analytics teams to bring their research to production-grade data solutions, using technologies such as Airflow, dbt, or MLflow
  • As a part of a platform team, you will communicate effectively with teams across the entire engineering organisation, to provide them with reliable foundational data models and data tools.
  • Mentor and provide technical guidance to other engineers working with data.
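
A hedged illustration of the Google Cloud pipeline bullet above: a minimal Apache Beam batch pipeline in Python (Beam is named in the requirements). The bucket paths, the hotel-count aggregation, and the field names are assumptions, not Lighthouse's actual design.

```python
# Minimal sketch of a Beam batch pipeline on the Google Cloud stack.
# Bucket paths and field names are placeholders.
import json

import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions

def parse_event(line: str):
    """Parse one JSON event; drop malformed lines silently."""
    try:
        yield json.loads(line)
    except json.JSONDecodeError:
        pass  # A production pipeline would route these to a dead-letter sink.

options = PipelineOptions()  # e.g. --runner=DataflowRunner --project=...

with beam.Pipeline(options=options) as p:
    (
        p
        | "Read" >> beam.io.ReadFromText("gs://example-bucket/events/*.json")
        | "Parse" >> beam.FlatMap(parse_event)
        | "KeyByHotel" >> beam.Map(lambda e: (e["hotel_id"], 1))
        | "CountPerHotel" >> beam.CombinePerKey(sum)
        | "Format" >> beam.MapTuple(lambda hotel, n: f"{hotel},{n}")
        | "Write" >> beam.io.WriteToText("gs://example-bucket/out/event_counts")
    )
```
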

Python, SQL, Apache Airflow, ETL, GCP, Kubernetes, Apache Kafka, Data engineering, CI/CD, Mentoring, Terraform, Scala, Data modeling

Posted 2 days ago
Apply

πŸ“ Worldwide

🧭 Full-Time

NOT STATED
  • Own the design and implementation of cross-domain data models that support key business metrics and use cases.
  • Partner with analysts and data engineers to translate business logic into performant, well-documented dbt models.
  • Champion best practices in testing, documentation, CI/CD, and version control, and guide others in applying them.
  • Act as a technical mentor to other analytics engineers, supporting their development and reviewing their code.
  • Collaborate with central data platform and embedded teams to improve data quality, metric consistency, and lineage tracking.
  • Drive alignment on model architecture across domainsβ€”ensuring models are reusable, auditable, and trusted.
  • Identify and lead initiatives to reduce technical debt and modernise legacy reporting pipelines.
  • Contribute to the long-term vision of analytics engineering at Pleo and help shape our roadmap for scalability and impact.

SQL, Data Analysis, ETL, Data engineering, CI/CD, Mentoring, Documentation, Data visualization, Data modeling, Data analytics, Data management

Posted 3 days ago
Apply

πŸ“ Brazil

🧭 Full-Time

πŸ” Software Development

🏒 Company: TELUS Digital Brazil

  • At least 3 years of experience as Data Engineer
  • Have actively participated in the design and development of data architectures
  • Hands-on experience in developing and optimizing data pipelines
  • Experience working with databases and data modeling projects, as well as practical experience utilizing SQL
  • Effective English communication - able to explain technical and non-technical concepts to different audiences
  • Experience with a general-purpose programming language such as Python or Scala
  • Ability to work well in teams and interact effectively with others
  • Ability to work independently and manage multiple tasks simultaneously while meeting deadlines
  • Develop and optimize scalable, high-performing, secure, and reliable data pipelines that address diverse business needs and considerations (a minimal sketch follows this list)
  • Identify opportunities to enhance internal processes, implement automation to streamline manual tasks, and contribute to infrastructure redesign
  • Act as a guide and mentor to junior engineers, supporting their professional growth and fostering an inclusive working environment
  • Collaborate with cross-functional teams to ensure data quality and support data-driven decision-making to strive for greater functionality in our data systems
  • Collaborate with project managers and product owners to assist in prioritizing, estimating, and planning development tasks
  • Provide constructive feedback, and share expertise with fellow team members, fostering mutual growth and learning
  • Engage in ongoing research and adoption of new technologies, libraries, frameworks, and best practices to enhance the capabilities of the data team
  • Demonstrate a commitment to accessibility and ensure that your work considers and positively impacts others
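
A minimal sketch of the pipeline work described above, expressed as an Apache Airflow DAG (Airflow appears in the skills below). It assumes Airflow 2.4+ for the `schedule` argument; the DAG id, schedule, and task body are placeholders.

```python
# Minimal sketch of a daily ETL DAG; connection details and business logic
# are placeholders, assuming Airflow 2.4+.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract_and_load(**context):
    # Placeholder: pull from a source system and load a warehouse table.
    print("running ETL for", context["ds"])

with DAG(
    dag_id="daily_sales_etl",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    PythonOperator(
        task_id="extract_and_load",
        python_callable=extract_and_load,
    )
```
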

AWS, Docker, Python, SQL, Agile, Apache Airflow, Cloud Computing, ETL, Kubernetes, Data engineering, Data science, Communication Skills, Analytical Skills, Teamwork, Data modeling, English communication

Posted 24 days ago
Apply
🔥 Senior Data Engineer
Posted about 1 month ago

πŸ“ Worldwide

🧭 Full-Time

πŸ’Έ 167471.0 USD per year

πŸ” Software Development

🏒 Company: Float.com

  • Expertise in ML, expert systems, and advanced algorithms (e.g., pattern matching, optimization) with applied experience in Scheduling, Recommendations, or Personalization.
  • Proficient in Python or Java and comfortable with SQL and JavaScript/TypeScript.
  • Experience with large-scale data pipelines and stream processing (e.g., Kafka, Debezium, Flink).
  • Skilled in data integration, cleaning, and validation.
  • Familiar with vector and graph databases (e.g., Neo4j).
  • Lead technical viability discussions.
  • Develop and test proof-of-concepts for this project.
  • Analyse existing data.
  • Evaluate our data streaming pipeline.
  • Lead technical discussions related to optimization, pattern detection, and AI, serving as the primary point of contact for these areas within Float.
  • Develop and implement advanced algorithms to enhance the Resource Recommendation Engine and other product features, initially focused on pattern detection and optimization.
  • Design, implement, and maintain our streaming data architecture to support real-time data processing and analytics, ensuring data integrity and reliability (see the sketch after this list).
  • Establish best practices and standards for optimization, AI, and data engineering development within the organization.
  • Mentor and train team members on optimization, AI, and data engineering concepts and techniques, fostering a culture of continuous learning and innovation.
  • Stay updated with the latest trends and related technologies, and proactively identify opportunities to incorporate them into Float's solutions.
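
To make the streaming bullet above concrete: a minimal Python sketch that consumes scheduling events from Kafka (named in the requirements) and applies a naive over-allocation rule. The topic, payload fields, and 40-hour threshold are assumptions, not Float's algorithm.

```python
# Minimal sketch: consume allocation events and flag a trivial "pattern".
# Topic name, fields, and the capacity rule are invented for illustration.
import json

from kafka import KafkaConsumer  # kafka-python

consumer = KafkaConsumer(
    "allocation-events",
    bootstrap_servers="localhost:9092",
    value_deserializer=lambda raw: json.loads(raw.decode("utf-8")),
    auto_offset_reset="earliest",
)

hours_by_person: dict[str, float] = {}

for message in consumer:
    event = message.value
    person = event["person_id"]
    hours_by_person[person] = hours_by_person.get(person, 0.0) + event["hours"]
    # Naive pattern check: flag anyone allocated past a weekly capacity.
    if hours_by_person[person] > 40.0:
        print(f"over-allocation detected for {person}")
```
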

Python, SQL, Kafka, Machine Learning, Algorithms, Data engineering

Posted about 1 month ago
Apply
🔥 Senior Data Engineer
Posted about 1 month ago

πŸ“ Europe, APAC, Americas

🧭 Full-Time

πŸ” Software Development

🏒 Company: DockerπŸ‘₯ 251-500πŸ’° $105,000,000 Series C about 3 years agoDeveloper ToolsDeveloper PlatformInformation TechnologySoftware

  • 4+ years of relevant industry experience
  • Experience with data modeling and building scalable pipelines
  • Proficiency with Snowflake or BigQuery
  • Experience with data governance and security controls
  • Experience creating ETL scripts using Python and SQL
  • Familiarity with a cloud ecosystem: AWS/Azure/Google Cloud
  • Experience with Tableau or Looker
  • Manage and develop ETL jobs, warehouse, and event collection tools (a minimal sketch follows this list)
  • Build and manage the Central Data Model for reporting
  • Integrate emerging methodologies and technologies
  • Build data pipelines for ML and AI projects
  • Contribute to SOC2 compliance across the data platform
  • Document technical architecture
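
A minimal sketch of an ETL job of the kind listed above, pushing the transformation into the warehouse with SQL via the BigQuery Python client (BigQuery is named in the requirements); dataset and table names are placeholders.

```python
# Minimal Python + SQL ETL sketch against BigQuery; names are placeholders.
from google.cloud import bigquery

client = bigquery.Client()  # Credentials are taken from the environment.

# Transform inside the warehouse, materializing a reporting table.
sql = """
CREATE OR REPLACE TABLE reporting.daily_signups AS
SELECT DATE(created_at) AS signup_date, COUNT(*) AS signups
FROM raw.users
GROUP BY signup_date
"""

client.query(sql).result()  # .result() blocks until the job finishes.
print("reporting.daily_signups refreshed")
```
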

Python, SQL, ETL, Snowflake, Airflow, Data engineering, Data visualization, Data modeling

Posted about 1 month ago
Apply

πŸ“ Worldwide

πŸ” Event Technology

NOT STATED
NOT STATED

AWS, Docker, PostgreSQL, Python, SQL, Apache Airflow, Cloud Computing, Data Analysis, ETL, Kubernetes, Algorithms, Apache Kafka, Data engineering, Data Structures, CI/CD, RESTful APIs, Microservices, Data visualization, Data modeling

Posted 2 months ago
Apply

πŸ“ Argentina

🧭 Full-Time

πŸ” Nonprofit fundraising technology

🏒 Company: GoFundMeπŸ‘₯ 251-500πŸ’° Series A over 9 years agoπŸ«‚ Last layoff over 2 years agoInternetCrowdfundingPeer to Peer

  • 5+ years as a data engineer crafting, developing, and maintaining business data warehouse solutions consisting of structured and unstructured data.
  • Proficiency with building and orchestrating data pipelines using ETL/data preparation tools.
  • Expertise in orchestration tools like Airflow or Prefect.
  • Proficiency in connecting data through web APIs.
  • Proficiency in writing and optimizing SQL queries.
  • Solid knowledge of Python and other programming languages.
  • Experience with Snowflake is required.
  • Good understanding of database architecture and best practices.
  • Develop and maintain enterprise data warehouse (Snowflake).
  • Develop and orchestrate ELT data pipelines, sourcing data from databases and web APIs (a minimal sketch follows this list).
  • Integrate data from warehouse into third-party tools for actionable insights.
  • Develop and sustain REST API endpoints for data science products.
  • Provide ongoing maintenance and improvements to existing data solutions.
  • Monitor and optimize Snowflake usage for performance and cost-effectiveness.
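
A minimal sketch of the ELT bullet above: source rows from a web API and land them in Snowflake with the official Python connector. The API URL, credentials, and table are invented for illustration.

```python
# Minimal ELT sketch: web API -> Snowflake. All names are placeholders.
import requests
import snowflake.connector

rows = requests.get("https://api.example.com/v1/donations", timeout=30).json()

conn = snowflake.connector.connect(
    account="example_account", user="etl_user", password="...",
    warehouse="ETL_WH", database="RAW", schema="DONATIONS",
)
try:
    cursor = conn.cursor()
    cursor.executemany(
        "INSERT INTO donations_raw (id, amount, created_at) VALUES (%s, %s, %s)",
        [(r["id"], r["amount"], r["created_at"]) for r in rows],
    )
    conn.commit()
finally:
    conn.close()
```
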

AWS, Python, SQL, AWS EKS, ETL, Java, Kubernetes, Snowflake, C++, Airflow, REST API, Collaboration, Terraform

Posted 4 months ago
Apply

πŸ“ Mexico

πŸ” ECommerce

🏒 Company: GluoπŸ‘₯ 101-250Digital MarketingAppsInformation Technology

  • 5+ years of experience in data engineering with a focus on modern data ecosystems.
  • Expertise in cloud platforms such as Google Cloud, AWS, or Azure.
  • Hands-on experience with ETL processes, real-time data pipelines, and tools like Apache Kafka.
  • At least one relevant certification is required (e.g., AWS Certified Data Analytics).
  • Proficiency in JavaScript, Python, and Java.
  • Deep knowledge of data modeling, architecture, and performance optimization.
  • Familiarity with AI/ML use cases.
  • Exceptional communication and problem-solving skills.
  • Capacity to collaborate with diverse teams and align technical execution with business priorities.
  • Leadership experience or interest in mentoring.
  • Shape the Data Fabric Practice by developing accelerators, frameworks, and best practices.
  • Build Modern Data Architectures to unify access to enterprise data for analytics and decision-making.
  • Enable Real-Time Data Synchronization with event-driven architectures like Kafka (see the sketch after this list).
  • Collaborate with cross-functional teams to align data solutions with strategic goals.
  • Optimize the Data Ecosystem for scalability and reliability of data pipelines.
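
To make the event-driven bullet above concrete, here is a minimal Python sketch of the producing side of real-time synchronization with Kafka; the topic and payload shape are assumptions, not Gluo's design.

```python
# Minimal sketch: publish change events for downstream consumers
# (analytics, search, caches). Topic and payload are placeholders.
import json

from kafka import KafkaProducer  # kafka-python

producer = KafkaProducer(
    bootstrap_servers="localhost:9092",
    value_serializer=lambda obj: json.dumps(obj).encode("utf-8"),
)

producer.send("product-updates", {"sku": "ABC-123", "price": 19.99})
producer.flush()  # Block until the event is actually delivered.
```
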

AWS, Leadership, Python, ETL, Java, JavaScript, Kafka, Apache Kafka, Azure, Data engineering, Communication Skills, Mentoring, Organizational skills

Posted 4 months ago
Apply

πŸ“ Argentina, Colombia, Costa Rica, Mexico

πŸ” Data Analytics

  • Proficient with SQL and data visualization tools (e.g., Tableau, Power BI, Google Data Studio).
  • Programming skills, mainly SQL.
  • Knowledge and experience with Python and/or R a plus.
  • Experience with Alteryx a plus.
  • Experience working with Google Cloud and AWS a plus.
  • Analyze data and consult with subject matter experts to design and develop business rules for data processing.
  • Setup and/or maintain existing dataflows in data wrangling tools like Alteryx or Google Dataprep.
  • Create and/or maintain SQL scripts.
  • Monitor, troubleshoot, and remediate data quality across marketing data systems.
  • Design and execute data quality checks (a minimal sketch follows this list).
  • Maintain ongoing management and stewardship of data governance, processing, and reporting.
  • Govern taxonomy additions, application, and use.
  • Serve as a knowledge expert for operational processes and identify improvement areas.
  • Evaluate opportunities for simplification and/or automation for reporting and processes.
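
A minimal sketch of the data quality checks referenced above, expressed as SQL assertions run from Python. sqlite3 stands in for the real warehouse connection; the table and rules are placeholders.

```python
# Minimal data-quality sketch: each check is a SQL query that counts
# violating rows; zero means the check passes. Names are placeholders.
import sqlite3  # Stand-in for the real warehouse driver.

conn = sqlite3.connect("marketing.db")

checks = {
    "no_null_campaign_ids":
        "SELECT COUNT(*) FROM ad_spend WHERE campaign_id IS NULL",
    "no_negative_spend":
        "SELECT COUNT(*) FROM ad_spend WHERE spend < 0",
}

for name, sql in checks.items():
    bad_rows = conn.execute(sql).fetchone()[0]
    status = "PASS" if bad_rows == 0 else f"FAIL ({bad_rows} rows)"
    print(f"{name}: {status}")
```
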

AWS, Python, SQL, Data Analysis, GCP, Microsoft Power BI, Tableau, Amazon Web Services, Data engineering

Posted 5 months ago
Apply

πŸ“ USA, CAN, MEX

🧭 Full-Time

πŸ” Transportation technology

🏒 Company: Fleetio

  • 5+ years experience working in a data engineering or data-focused software engineering role.
  • Experience transforming raw data into clean models using standard tools of the modern data stack.
  • Deep understanding of ELT and data modeling concepts.
  • Experience with streaming data and pipelines (Kafka or Kinesis).
  • Proficiency in Python with a proven track record of delivering production-ready Python applications.
  • Experience in designing, building, and administering modern data pipelines and data warehouses.
  • Experience with dbt.
  • Familiarity with semantic layers like Cube or MetricFlow.
  • Experience with Snowflake, BigQuery, or Redshift.
  • Knowledge of version control tools such as GitHub or GitLab.
  • Experience with ELT tools like Stitch or Fivetran.
  • Experience with orchestration tools such as Prefect or Dagster.
  • Knowledge of CI/CD and IaC tooling such as GitHub Actions and Terraform.
  • Experience with business intelligence solutions (Metabase, Looker, Tableau, Periscope, Mode).
  • Familiarity with serverless cloud functions (AWS Lambda, Google Cloud Functions, etc.).
  • Excellent communication and project management skills with a customer service-focused mindset.
  • Enable and scale self-serve analytics for all Fleetio team members by modeling data and metrics via tools like dbt.
  • Develop data destinations, custom integrations, and maintain open source packages for customer data integration.
  • Maintain and develop custom data pipelines from operational source systems for both streaming and batch sources (see the sketch after this list).
  • Work on the development of internal data infrastructure, improving data hygiene and integrity through ELT pipeline monitoring.
  • Architect, design, and implement core components of data platform including data observability and data science products.
  • Develop and maintain streaming data pipelines from various databases and sources.
  • Collaborate across the company to tailor data needs and ensure data is appropriately modeled and available.
  • Document best practices and coach others on data modeling and SQL query optimization, managing roles, permissions, and deprecated projects.
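
A minimal sketch of an orchestrated pipeline of the kind described above, using Prefect (one of the orchestrators named in the requirements); the flow name and task bodies are placeholders.

```python
# Minimal Prefect 2 flow sketch; extract/load bodies are placeholders.
from prefect import flow, task

@task(retries=2)
def extract() -> list[dict]:
    # Placeholder: pull a batch from an operational source system.
    return [{"vehicle_id": 1, "odometer": 42000}]

@task
def load(records: list[dict]) -> None:
    # Placeholder: write the batch to the warehouse.
    print(f"loaded {len(records)} records")

@flow(log_prints=True)
def vehicle_snapshot_elt():
    load(extract())

if __name__ == "__main__":
    vehicle_snapshot_elt()
```
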

AWS, Project Management, Python, SQL, Business Intelligence, Design Patterns, Kafka, Snowflake, Tableau, Data engineering, Serverless, Communication Skills, CI/CD

Posted 6 months ago
Apply