
Senior Data Engineer

Posted about 2 months ago

💎 Seniority level: Senior

📍 Location: Worldwide

🔍 Industry: Event technology

🪄 Skills: AWS, Docker, PostgreSQL, Python, SQL, Apache Airflow, Cloud Computing, Data Analysis, ETL, Kubernetes, Algorithms, Apache Kafka, Data engineering, Data Structures, CI/CD, RESTful APIs, Microservices, Data visualization, Data modeling

Requirements:
  • Experience in data engineering and building data pipelines.
  • Proficiency in programming languages like Python, Java, or Scala.
  • Familiarity with cloud platforms and data architecture design.
Responsibilities:
  • Design and develop data solutions to enhance the functionality of the platform.
  • Implement efficient data pipelines and ETL processes.
  • Collaborate with cross-functional teams to define data requirements.

Related Jobs

📍 States of São Paulo and Rio Grande do Sul, Rio de Janeiro, Belo Horizonte

🔍 Data Engineering

🏢 Company: TELUS Digital Brazil

  • At least 3 years of experience as a Data Engineer
  • Have actively participated in the design and development of data architectures
  • Hands-on experience in developing and optimizing data pipelines
  • Experience working with databases and data modeling projects, as well as practical experience utilizing SQL
  • Effective English communication - able to explain technical and non-technical concepts to different audiences
  • Experience with a general-purpose programming language such as Python or Scala
  • Ability to work well in teams and interact effectively with others
  • Ability to work independently and manage multiple tasks simultaneously while meeting deadlines
  • Develop and optimize scalable, high-performing, secure, and reliable data pipelines that address diverse business needs and considerations
  • Identify opportunities to enhance internal processes, implement automation to streamline manual tasks, and contribute to infrastructure redesign
  • Act as a guide and mentor to junior engineers, supporting their professional growth and fostering an inclusive working environment
  • Collaborate with cross-functional teams to ensure data quality and support data-driven decision-making to strive for greater functionality in our data systems
  • Collaborate with project managers and product owners to assist in prioritizing, estimating, and planning development tasks
  • Provide constructive feedback, and share expertise with fellow team members, fostering mutual growth and learning
  • Engage in ongoing research and adoption of new technologies, libraries, frameworks, and best practices to enhance the capabilities of the data team
  • Demonstrate a commitment to accessibility and ensure that your work considers and positively impacts others

AWS, Docker, Python, SQL, Agile, Apache Airflow, Cloud Computing, ETL, Kubernetes, Data engineering, Data science, Communication Skills, Analytical Skills, Teamwork, Data modeling, English communication

Posted 1 day ago

πŸ“ Germany, Spain, United Kingdom, Austria

πŸ” Software Development

🏒 Company: LocalStackπŸ‘₯ 11-50πŸ’° $25,000,000 Series A 4 months agoCloud ComputingInformation TechnologySoftware

  • Ability and experience working with non-technical stakeholders to gather requirements
  • Ability to define technical initiatives required to satisfy business requirements
  • Excellent knowledge of Python
  • Experience in designing real time data ingestion solutions with massive volumes of data
  • (preferred) Experience with AWS services commonly used in Data Engineering (like S3, ECS, Glue, EMR)
  • Experience with relational databases and data warehouses, data orchestration and ingestion tools, SQL, and BI tools
  • (preferred) Experience working remotely / in async settings
  • Experience owning initiatives at the IC level
  • Experience providing guidance to junior engineers
  • Maintain, monitor, and optimize data ingestion pipelines for our current data platform.
  • Lead the development of our future data platform based on evolving business needs.
  • Shape the data team roadmap and contribute to long-term strategic planning.
  • Take full ownership of data ingestion from external sources, ensuring smooth functionality.
  • Design and implement a robust data modelling and data lake solution architecture.
  • Provide technical leadership and mentorship to the data engineering team.
  • Collaborate with engineering teams to define and refine ingestion pipeline requirements.
  • Work with stakeholders to gather business questions and data needs.

AWS, Docker, Leadership, Python, SQL, Apache Airflow, ETL, Kafka, Data engineering, Data Structures, REST API, Communication Skills, Analytical Skills, Collaboration, CI/CD, Problem Solving, Mentoring, Written communication, Data visualization, Team management, Stakeholder management, Data modeling

Posted 1 day ago

πŸ“ Canada

🧭 Full-Time

πŸ” Retail Media

🏒 Company: VantageπŸ‘₯ 1001-5000CryptocurrencyFinancial ServicesFinTechTrading Platform

  • 5+ years of experience in data engineering, big data, or distributed systems.
  • Strong expertise in Python, SQL (or equivalent big data processing frameworks).
  • Proficiency in ETL/ELT pipelines using Apache Airflow, or similar orchestration tools.
  • Experience working with real-time streaming data (Kafka, Kinesis, or Pub/Sub).
  • Strong understanding of data modelling, data warehousing, and distributed systems.
  • Familiarity with privacy-compliant data processing (GDPR, CCPA) for advertising/retail media use cases.
  • Design, develop, and optimize data pipelines, ETL/ELT workflows, and data warehouses to support large-scale retail media analytics.
  • Handle real-time and batch processing at scale
  • Work closely with data scientists, analysts, software engineers, and product teams to ensure seamless data integration and access.
  • Implement robust monitoring, validation, and security controls to maintain high data reliability.

Python, SQL, Apache Airflow, ETL, Kafka, Data engineering, Data modeling

Posted 2 days ago

πŸ“ United States

πŸ” Software Development

AWS, PostgreSQL, Python, SQL, Apache Airflow, Cloud Computing, Data Analysis, ETL, Apache Kafka, Data engineering, CI/CD, RESTful APIs, Microservices, Data visualization, Data modeling, Data analytics, Data management

Posted 3 days ago

πŸ“ United States, Canada

🧭 Full-Time

πŸ’Έ 105825.0 - 136950.0 CAD per year

πŸ” Data Engineering

🏒 Company: SamsaraπŸ‘₯ 1001-5000πŸ’° Secondary Market over 4 years agoπŸ«‚ Last layoff almost 5 years agoCloud Data ServicesBusiness IntelligenceInternet of ThingsSaaSSoftware

  • BS degree in Computer Science, Statistics, Engineering, or a related quantitative discipline
  • 6+ years of experience in a data engineering and data science-focused role
  • Proficiency in data manipulation and processing in SQL and Python
  • Expertise building data pipelines with new API endpoints from their documentation
  • Proficiency in building ETL pipelines to handle large volumes of data
  • Demonstrated experience in designing data models at scale
  • Build and maintain highly reliable computed tables, incorporating data from various sources, including unstructured and highly sensitive data
  • Access, manipulate, and integrate external datasets with internal data
  • Build analytical and statistical models to identify patterns, anomalies, and root causes
  • Leverage SQL and Python to shape and aggregate data
  • Incorporate generative AI tools (ChatGPT Enterprise) into production data pipelines and automated workflows
  • Collaborate closely with data scientists, data analysts, and Tableau developers to ship top quality analytic products
  • Champion, role model, and embed Samsara’s cultural principles (Focus on Customer Success, Build for the Long Term, Adopt a Growth Mindset, Be Inclusive, Win as a Team) as we scale globally and across new offices

Python, SQL, ETL, Tableau, API testing, Data engineering, Data science, Spark, Communication Skills, Analytical Skills, Data visualization, Data modeling

Posted 4 days ago

πŸ“ India

🏒 Company: BlackStone eITπŸ‘₯ 251-500Augmented RealityRoboticsAnalyticsProject Management

  • 5+ years of experience in data engineering or a similar role.
  • Proficiency in SQL and experience with relational databases.
  • Hands-on experience with data pipeline tools and ETL processes.
  • Familiarity with big data technologies (e.g., Hadoop, Spark) is a plus.
  • Experience with cloud-based data solutions is an advantage.
  • Design, implement, and maintain scalable data architectures and pipelines.
  • Develop ETL processes to facilitate smooth data transfer and transformation from various sources.
  • Collaborate with data scientists and analysts to fulfill data needs for analytics and reporting.
  • Optimize database performance and maintain data integrity across systems.
  • Conduct data quality checks and resolve discrepancies.
  • Mentor junior data engineers and provide technical guidance.
  • Stay current with emerging technologies and best practices in data engineering.

SQL, Apache Hadoop, Cloud Computing, Data Analysis, ETL, Data engineering, RDBMS, Spark, Data visualization, Data modeling, Data management

Posted 4 days ago

πŸ“ United States, Canada

πŸ” Software Development

🏒 Company: OverstoryπŸ‘₯ 1-10E-Commerce

  • Approximately 5 years of experience in Data Engineering, including at least one role in a startup environment
  • Product-minded and able to demonstrate significant impact you have had on a business through the application of technology
  • Proven data engineering experience across the following (or similar) technologies: Python, data orchestration platforms (Airflow, Luigi, Dagster, etc.), data quality frameworks, data lakes/warehouses
  • Ability to design and implement scalable and resilient data systems
  • Excellent communication skills and ability to collaborate effectively in a cross-functional team environment
  • Passion for learning and staying updated with evolving technologies and industry trends
  • Owning day-to-day operational responsibility for delivering our analysis to customers
  • Developing data-driven solutions to customer problems that our products aren’t solving for yet
  • Building new and improving existing technologies such as:
  • Automation of the analysis for all customers, leading to faster implementation of Overstory’s recommendations
  • Metrics to identify the time bottlenecks in the current analysis flow, helping all Overstory teams identify areas of improvement
  • Visualization of status and progress of the analysis for internal use
  • Working on performance & scalability of our pipelines ensuring that our tech can handle our growth

Python, SQL, Cloud Computing, GCP, Amazon Web Services, Data engineering, Communication Skills, Analytical Skills, RESTful APIs, Data visualization, Data modeling

Posted 8 days ago

πŸ“ United States

🧭 Full-Time

πŸ” Software Development

🏒 Company: MangomintπŸ‘₯ 51-100πŸ’° $35,000,000 Series B 6 months agoManagement Information SystemsBeautySoftware

  • 3+ years of experience in data engineering or a related role
  • Proficiency in SQL and Python for data pipelines and automation
  • Experience with dbt (or similar data modeling tools)
  • Familiarity with Snowflake (or other cloud data warehouses)
  • Knowledge of APIs and experience integrating various data sources
  • Experience with CRM and business systems (Salesforce, Outreach, Stripe, etc.)
  • Strong problem-solving skills and ability to take ownership of projects
  • Ability to work independently in a small, fast-paced startup environment
  • Effective communication skills to translate business needs into technical solutions
  • Design, develop, and maintain ETL/ELT data pipelines using Snowflake, dbt, Prefect, and other modern data tools
  • Automate data workflows to improve efficiency and reliability
  • Integrate CRM and other business systems to support cross-functional needs
  • Develop data enrichment pipelines to power our sales process
  • Build internal data tools to drive data-driven decision making
  • Work directly with stakeholders to define requirements and implement data solutions that support business objectives
  • Ensure data integrity, governance, and security best practices are upheld
  • Support analytics and reporting efforts by building dashboards and data models in dbt and Sigma

AWS, Python, SQL, ETL, Snowflake, CRM, Data modeling

Posted 9 days ago

πŸ“ United States

🧭 Full-Time

πŸ’Έ 160000.0 - 182000.0 USD per year

πŸ” Adtech

🏒 Company: tvScientificπŸ‘₯ 11-50πŸ’° $9,400,000 Convertible Note about 1 year agoInternetAdvertising

  • 7+ years of experience in data engineering.
  • Proven experience building data infrastructure using Spark with Scala.
  • Familiarity with data lakes, cloud warehouses, and storage formats.
  • Strong proficiency in AWS services.
  • Expertise in SQL for data manipulation and extraction.
  • Design and implement robust data infrastructure using Spark with Scala.
  • Collaborate with our cross-functional teams to design data solutions that meet business needs.
  • Build out our core data pipelines, store data in optimal engines and formats, and feed our machine learning models.
  • Leverage and optimize AWS resources.
  • Collaborate closely with the Data Science team.

AWS, SQL, Cloud Computing, ETL, Machine Learning, Data engineering, Spark, Scala, Data modeling

Posted 9 days ago

πŸ“ United States, Canada

🧭 Full-Time

πŸ’Έ 110000.0 - 130000.0 USD per year

πŸ” Software Development

🏒 Company: CerosπŸ‘₯ 101-250πŸ’° $100,000,000 Private over 4 years agoAdvertisingContent CreatorsContent MarketingGraphic DesignSoftware

  • 5+ years of experience in data engineering, focusing on AWS Redshift and ETL pipeline development.
  • Strong expertise in SQL performance tuning, schema management, and query optimization.
  • Experience designing and maintaining ETL pipelines using AWS Glue, Matillion, or similar tools.
  • Proficiency in JavaScript/TypeScript, with experience building custom ETL workflows and integrations.
  • Hands-on experience with Python for data automation and scripting.
  • Strong understanding of data warehousing best practices, ensuring high-quality, scalable data models.
  • Experience with data monitoring and alerting tools such as AWS CloudWatch and New Relic.
  • Ability to work independently in a fast-paced environment, collaborating across teams to support data-driven initiatives.
  • Own and lead the management of AWS Redshift, ensuring optimal performance, disk usage, and cost efficiency.
  • Design and maintain scalable ETL pipelines using AWS Glue, Lambda, and Matillion to integrate data from Mixpanel, CRM platforms, and customer engagement tools.
  • Optimize SQL-based data transformations and Redshift queries to improve performance and reliability.
  • Automate data offloading and partition management, leveraging AWS services like S3 and external schemas.
  • Ensure version control and documentation of all Redshift queries, ETL processes, and AWS configurations through a centralized GitHub repository.
  • Develop monitoring and alerting for data pipelines using CloudWatch and other observability tools to ensure high availability and early issue detection.
  • Implement and maintain data quality checks and governance processes to ensure accuracy and consistency across foundational tables.
  • Collaborate with AI engineers and business stakeholders to enhance data accessibility and reporting for internal teams.
  • Maintain and optimize BI dashboards in Metabase and HubSpot, ensuring accuracy and efficiency of business reporting.
  • Manage key integrations between Redshift and external platforms, including Mixpanel, HubSpot, and Census, optimizing data accessibility and performance.
  • Administer AWS infrastructure supporting Redshift, ensuring efficient resource utilization, IAM security, and cost management.
  • Automate repetitive data tasks using Python and scripting to enhance data processes and improve team efficiency.

AWS, Python, SQL, ETL, Git, JavaScript, TypeScript, Amazon Web Services, API testing, Data engineering, REST API, CI/CD, Ansible, Data modeling, Data analytics, Data management

Posted 9 days ago