Data Engineer

Posted about 19 hours ago

💎 Seniority level: Mid-level, 3+ years

📍 Location: United States

💸 Salary: 135,000 - 150,000 USD per year

🔍 Industry: Fintech

🏢 Company: Branch · 👥 251-500 · 💰 $300,000,000 Series F about 3 years ago · 🫂 Last layoff 7 months ago · Mobile Advertising, App Marketing, Mobile Apps, Software

🗣️ Languages: English

⏳ Experience: 3+ years

🪄 Skills: AWS, Python, SQL, Apache Airflow, ETL, GCP, Kubernetes, Snowflake, API testing, Data engineering, REST API, Data modeling, Data analytics

Requirements:
  • 3+ years of experience as a Data Engineer
  • 3+ years using Apache Airflow in a production setting
  • Expert in Python and SQL
  • Hands-on experience with cloud platforms such as Google Cloud (GCP) or AWS
  • Familiarity with our modern data stack: Snowflake, Airflow, Apache Beam, dbt, Looker, Secoda, Fivetran, Hightouch
Responsibilities:
  • Develop, maintain, and optimize data pipelines to support analytics, reporting, and business intelligence needs.
  • Design and implement data integrations across multiple backend business systems, ensuring efficient and reliable data flows.
  • Work on streaming data processing using frameworks like Apache Beam to handle real-time data ingestion and transformation.
  • Build and maintain APIs to enable seamless data access and sharing across internal teams and platforms.
  • Manage orchestration and automation of workflows using Airflow on Kubernetes.
  • Ensure data quality, security, and compliance by implementing best practices for monitoring, testing, and governance.
  • Collaborate with analytics and engineering teams to optimize data models and improve system performance.
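
Given the emphasis on Airflow orchestration above, here is a minimal, hedged sketch of the kind of DAG such a team maintains; the DAG, task, and callable names are hypothetical, not taken from the posting:

    # Hypothetical two-step pipeline: extract, then load. Airflow 2.x APIs.
    from datetime import datetime

    from airflow import DAG
    from airflow.operators.python import PythonOperator

    def extract_orders(**context):
        ...  # pull a daily batch from a source system (placeholder)

    def load_to_snowflake(**context):
        ...  # load the batch into the warehouse (placeholder)

    with DAG(
        dag_id="orders_daily",            # hypothetical name
        start_date=datetime(2024, 1, 1),
        schedule="@daily",                # Airflow 2.4+; older versions use schedule_interval
        catchup=False,
    ) as dag:
        extract = PythonOperator(task_id="extract_orders", python_callable=extract_orders)
        load = PythonOperator(task_id="load_to_snowflake", python_callable=load_to_snowflake)
        extract >> load
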
Apply

Related Jobs

Apply
🔥 Senior Data Engineer - Analytics
Posted about 22 hours ago

📍 United States

💸 167,249 - 216,000 USD per year

🔍 Healthcare

Requirements:
  • 5-8 years of experience in data engineering, analytics engineering, or a related field.
  • Bachelor’s or Master’s degree in Computer Science, Data Science, Information Systems, or a related field, with strong coursework in databases, data structures, and system design, or equivalent industry experience.
  • Strong proficiency in SQL and Python, and experience working with cloud data warehouses (e.g., BigQuery), modern analytics pipeline and orchestration tools (e.g., dbt, Dagster, Airflow), and self-service analytics tools (e.g., Looker).
  • A solid understanding of analytics database concepts, ELT pipelines, and best practices in data modeling.
  • Ability to work cross-functionally with stakeholders to gather requirements and deliver impactful solutions.
  • Strong problem-solving skills and a passion for building scalable data solutions.
  • [Nice to have] Experience in the healthcare industry or otherwise handling sensitive data.
Responsibilities:
  • Design, develop, and maintain ELT data pipelines to ensure reliable and efficient data processing. Our tools include dbt, Dagster, and GCP.
  • Build and optimize data models to support analytics and reporting needs.
  • Collaborate with analysts and business stakeholders to create and maintain self-service analytics tools that provide meaningful insights. Our primary tools are Looker and Amplitude.
  • Ensure data quality and integrity through testing, validation, and documentation.
  • Monitor and improve analytics database performance, optimizing queries and warehouse costs. Our analytics database lives on BigQuery.
  • Automate and improve our data pipeline workflows for scalability and efficiency.
  • Work closely with product, engineering, and business teams to understand data requirements and translate them into effective solutions.
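
As a hedged illustration of the dbt/Dagster-style pipeline work described above, here is a minimal Dagster sketch with software-defined assets; the asset names and sample data are invented for illustration:

    import pandas as pd
    from dagster import asset

    @asset
    def raw_events() -> pd.DataFrame:
        # Placeholder for an ingestion source (e.g., a landing table).
        return pd.DataFrame({"user_id": [1, 2, 2], "event": ["signup", "view", "purchase"]})

    @asset
    def daily_event_counts(raw_events: pd.DataFrame) -> pd.DataFrame:
        # Downstream model: aggregate events per user, dbt-style.
        return raw_events.groupby("user_id").size().reset_index(name="event_count")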

Python, SQL, Apache Airflow, Data Analysis, ETL, GCP, Amplitude Analytics, Data engineering, Data Structures, Communication Skills, Analytical Skills, Problem Solving, Data modeling

Apply

📍 USA

🧭 Full-Time

🔍 Digital Media Agency

🏢 Company: Spring & Bond · 👥 11-50 · Digital Marketing, Advertising, Digital Media, Consulting

Requirements:
  • 1+ years of experience in a data engineering or related role.
  • Strong SQL skills, including the ability to write complex queries using JOINs and aggregate functions.
  • Proficiency in Python and data manipulation libraries such as Pandas.
  • Experience with data validation techniques and tools.
  • Familiarity with AWS cloud services, particularly S3 and Lambda.
  • Experience with Git for code version control.
  • Detail-oriented with a focus on data accuracy and quality.
  • Organized with a systematic approach to managing data workflows.
  • Comfortable working in an ambiguous environment and able to independently drive projects forward.
  • Nimble and able to adapt to changing priorities.
Responsibilities:
  • Design, develop, and maintain ETL pipelines to ingest and transform data from various sources.
  • Implement data validation processes to ensure data accuracy and consistency throughout the data lifecycle, using techniques such as regex-based checks.
  • Write and optimize SQL queries for data extraction, aggregation, and analysis.
  • Develop and maintain Python scripts and Pandas dataframes for data manipulation and analysis.
  • Utilize AWS Lambda functions to automate data processing tasks.
  • Manage code using Git for version control and collaborative development.
  • Collaborate with data analysts, media strategists, and other stakeholders to understand their data requirements and provide solutions.
  • Communicate technical concepts to non-technical stakeholders and translate business needs into technical specifications.
  • Troubleshoot and resolve data-related issues, identifying areas for improvement and efficiency gains.
  • Document data processes, pipelines, and transformations for knowledge sharing and maintainability.
  • Work with vendors to ensure seamless data integration and resolve any data delivery issues.
  • Apply critical thinking skills to analyze complex data problems and develop innovative solutions.
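
The validation and AWS Lambda duties above might look something like this minimal sketch, assuming pandas is available in the runtime (e.g., via a layer); the bucket, key, column, and pattern are hypothetical:

    # Hypothetical Lambda: read a CSV from S3, flag rows failing a regex check.
    import re

    import boto3
    import pandas as pd

    EMAIL_RE = re.compile(r"^[^@\s]+@[^@\s]+\.[^@\s]+$")  # assumed email format

    def handler(event, context):
        s3 = boto3.client("s3")
        obj = s3.get_object(Bucket=event["bucket"], Key=event["key"])
        df = pd.read_csv(obj["Body"])
        bad = df[~df["email"].astype(str).str.match(EMAIL_RE)]
        # Report invalid rows rather than silently dropping them.
        return {"rows": len(df), "invalid_emails": len(bad)}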

AWS, Python, SQL, Data Analysis, ETL, Git, Data engineering, Pandas

Posted 1 day ago
Apply

📍 United States

🧭 Contract

💸 91,520 - 120,000 USD per year

🏢 Company: Third Eye Software · 👥 11-50 · Consulting, Information Technology, Recruiting, Software

Requirements:
  • 3+ years of experience in data engineering
  • Advanced expertise in SQL
  • Experience with Google Cloud Platform (GCP)
  • Hands-on experience with ETL/ELT processes and data pipeline development
  • Proficiency in Python
Responsibilities: not stated.

Python, SQL, ETL, GCP, Jenkins, NumPy, Data engineering, Pandas

Posted 1 day ago
Apply

📍 United States

💸 64,000 - 120,000 USD per year

Requirements:
  • Strong PL/SQL and SQL development skills
  • Proficient in multiple languages used in data engineering, such as Python and Java
  • Minimum 3-5 years of experience in Data engineering working with Oracle and MS SQL
  • Experience with data warehousing concepts and technologies including cloud-based services (e.g. Snowflake)
  • Experience with cloud platforms like Azure and knowledge of infrastructure
  • Experience with data orchestration tools (e.g. Azure Data Factory, Databricks workflows)
  • Understanding of data privacy regulations and best practices
  • Experience working with remote teams
  • Experience working on a team with a CI/CD process
  • Familiarity using tools like Git, Jira
  • Bachelor's degree in Computer Science or Computer Engineering
Responsibilities:
  • Design, implement and maintain scalable pipelines and architecture to collect, process, and store data from various sources.
  • Unit test and document solutions that meet product quality standards prior to release to QA.
  • Identify and resolve performance bottlenecks in pipelines due to data, queries and processing workflows to ensure efficient and timely data delivery.
  • Implement data quality checks and validation processes to ensure accuracy, completeness, and consistency of data delivery.
  • Work with Data Architect and implement best practices for data governance, quality and security.
  • Collaborate with cross-functional teams to identify and address data needs.
  • Ensure technology solutions support the needs of the customer and/or organization.
  • Define and document technical requirements.
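
As a hedged sketch of the data-quality checks listed above, the following works against any DB-API 2.0 connection (Oracle, MS SQL, and Snowflake drivers all expose this interface); the table names and thresholds would be project-specific:

    # Generic data-quality assertions over a DB-API 2.0 connection.
    # Table/column names are assumed to come from trusted config, not user input.
    def check_row_count(conn, table: str, minimum: int) -> None:
        cur = conn.cursor()
        cur.execute(f"SELECT COUNT(*) FROM {table}")
        (count,) = cur.fetchone()
        if count < minimum:
            raise ValueError(f"{table}: expected at least {minimum} rows, found {count}")

    def check_no_nulls(conn, table: str, column: str) -> None:
        cur = conn.cursor()
        cur.execute(f"SELECT COUNT(*) FROM {table} WHERE {column} IS NULL")
        (nulls,) = cur.fetchone()
        if nulls:
            raise ValueError(f"{table}.{column}: {nulls} NULL values found")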

Python, SQL, ETL, Git, Java, Oracle, Snowflake, Azure, Data engineering, CI/CD, RESTful APIs

Posted 3 days ago
Apply
🔥 BI Data Engineer
Posted 3 days ago

📍 AZ, CA, CO, CT, FL, GA, IL, MA, NV, NJ, NM, NY, OH, OR, PA, TX, VA, WA

🧭 Full-Time

💸 105,000 - 120,000 USD per year

🔍 Software Development

🏢 Company: Committee for Children

Requirements:
  • 5+ years’ experience working with relational database systems
  • 5+ years’ experience performing business and financial analysis
  • 3+ years’ experience working with Power BI to develop reports and dashboards
  • Advanced proficiency in Power BI (Power Query M, DAX), Power Automate, Excel, SQL, and Microsoft Fabric Analytics
  • Experience with ETL processes, data warehousing, subscription business models, and SaaS KPIs
  • Experience with ERP systems (NetSuite preferred)
  • Experience with different data warehouse designs
  • Ability to prioritize tasks and manage work efficiently while maintaining a high level of productivity
  • Demonstrated ability to navigate and articulate the workflow between data warehousing, data transformation, and reporting tools to ensure accuracy and relevance of insights generated
  • Experience working independently and in a team-oriented, collaborative environment
  • Strong critical thinking, analytical, and problem-solving skills
  • Sound decision making, discretion, and confidentiality
Responsibilities:
  • Develop and maintain datasets, data models, and data visualizations to support business decisions
  • Develop different data warehouse designs like star schema, snowflake schema, and dimensional modeling
  • Identify data sources, definitions, and timelines appropriate for analysis
  • Write, optimize and maintain complex SQL queries to support data analysis and reporting needs
  • Develop and generate ad hoc reports based on stakeholder requirements to support decision making processes in various departments
  • Work with Fabric to connect to various data sources such as databases, cloud storage, or APIs
  • Integrate data warehouse with business intelligence tools to create reports and dashboards
  • Design and build interactive reports using Power BI to present findings and identify trends to stakeholders
  • Ensure data quality and integrity by identifying and resolving data issues
  • Perform root cause analysis and uncover core issues using data, then assist the organization to improve
  • Analyze and interpret various sources of internal data and external data sources to support business decision-making
  • Design, build and maintain automated workflows using Power Automate to streamline business processes
  • Identify opportunities for process improvement and develop solutions to reduce manual effort
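
To make the star-schema work above concrete, here is a self-contained sketch using SQLite so it runs anywhere; the fact and dimension tables and sample values are invented for illustration:

    # A minimal star schema: one fact table joined to one dimension.
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.executescript("""
        CREATE TABLE dim_product (product_id INTEGER PRIMARY KEY, category TEXT);
        CREATE TABLE fact_sales (
            sale_id INTEGER PRIMARY KEY,
            product_id INTEGER REFERENCES dim_product(product_id),
            amount REAL
        );
        INSERT INTO dim_product VALUES (1, 'curriculum'), (2, 'training');
        INSERT INTO fact_sales VALUES (1, 1, 99.0), (2, 1, 49.0), (3, 2, 199.0);
    """)
    # Typical BI rollup: fact table joined to a dimension, grouped by attribute.
    for row in conn.execute("""
        SELECT p.category, SUM(f.amount) AS revenue
        FROM fact_sales f
        JOIN dim_product p ON p.product_id = f.product_id
        GROUP BY p.category
    """):
        print(row)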

SQL, Business Intelligence, Data Analysis, ETL, Microsoft SQL Server, Data engineering, Communication Skills, Analytical Skills, Collaboration, Microsoft Excel, Problem Solving, RESTful APIs, Critical thinking, Reporting, Troubleshooting, JSON, Data visualization, Financial analysis, Data modeling, Scripting, Data analytics, Data management, SaaS

Apply
🔥 Data Engineer
Posted 3 days ago

📍 United States

🧭 Full-Time

🔍 Sustainable Agriculture

🏢 Company: Agrovision

Requirements:
  • Experience with RDBMS (e.g., Teradata, MS SQL Server, Oracle) in production environments is preferred
  • Hands-on experience in data engineering and databases/data warehouses
  • Familiarity with Big Data platforms (e.g., Hadoop, Spark, Hive, HBase, Map/Reduce)
  • Expert-level understanding of Python (e.g., Pandas)
  • Proficient in shell scripting (e.g., Bash) and Python data application development (or similar)
  • Excellent collaboration and communication skills with teams
  • Strong analytical and problem-solving skills, essential for tackling complex challenges
  • Experience working with BI teams and tooling (e.g., Power BI), supporting analytics work and interfacing with Data Scientists
Responsibilities:
  • Collaborate with data scientists to ensure high-quality, accessible data for analytical and predictive modeling
  • Design and implement data pipelines (ETLs) tailored to meet business needs and digital/analytics solutions
  • Enhance data integrity, security, quality, and automation, addressing system gaps proactively
  • Support pipeline maintenance, troubleshoot issues, and optimize performance
  • Lead and contribute to defining detailed scalable data models for our global operations
  • Ensure data security standards are met and upheld by contributors, partners and regional teams through programmatic solutions and tooling
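
A hedged sketch of the Pandas-based ETL work these responsibilities describe; the file paths, column names, and unit conversion are hypothetical:

    # Hypothetical transform step: clean a harvest extract and write Parquet.
    import pandas as pd

    def transform_yields(src: str, dest: str) -> None:
        df = pd.read_csv(src)
        # Drop rows missing key fields, then normalise units.
        df = df.dropna(subset=["field_id", "harvest_kg"])
        df["harvest_tonnes"] = df["harvest_kg"] / 1000.0
        df.to_parquet(dest, index=False)  # requires pyarrow or fastparquet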

Python, SQL, Apache Hadoop, Bash, ETL, Data engineering, Data science, RDBMS, Pandas, Spark, Communication Skills, Analytical Skills, Collaboration, Problem Solving, Data modeling

Apply

📍 United States, Latin America, India

🔍 Software Development

🏢 Company: phData · 👥 501-1000 · 💰 $2,499,997 Seed about 7 years ago · Information Services, Analytics, Information Technology

Requirements:
  • 4+ years as a hands-on Data Engineer and/or Software Engineer
  • Experience with software development life cycle, including unit and integration testing
  • Programming expertise in Java, Python and/or Scala
  • Experience with core cloud data platforms including Snowflake, AWS, Azure, Databricks and GCP
  • Experience using SQL and the ability to write, debug, and optimize SQL queries
  • Client-facing written and verbal communication skills
Responsibilities:
  • Design and implement data solutions
  • Help ensure performance, security, scalability, and robust data integration
  • Develop end-to-end technical solutions into production
  • Multitask, prioritize, and work across multiple projects at once
  • Create and deliver detailed presentations
  • Produce detailed solution documentation (e.g., POCs and roadmaps, sequence diagrams, class hierarchies, logical system views)

AWS, Python, Software Development, SQL, Cloud Computing, Data Analysis, ETL, GCP, Java, Kafka, Snowflake, Azure, Data engineering, Spark, Communication Skills, CI/CD, Problem Solving, Agile methodologies, RESTful APIs, Documentation, Scala, Data modeling

Posted 4 days ago
Apply
🔥 Senior Data Engineer
Posted 4 days ago

📍 United States

💸 144,000 - 180,000 USD per year

🔍 Software Development

🏢 Company: Hungryroot · 👥 101-250 · 💰 $40,000,000 Series C almost 4 years ago · Artificial Intelligence (AI), Food and Beverage, E-Commerce, Retail, Consumer Goods, Software

Requirements:
  • 5+ years of experience in ETL development and data modeling
  • 5+ years of experience in both Scala and Python
  • 5+ years of experience in Spark
  • Excellent problem-solving skills and the ability to translate business problems into practical solutions
  • 2+ years of experience working with the Databricks Platform
Responsibilities:
  • Develop pipelines in Spark (Python + Scala) in the Databricks Platform
  • Build cross-functional working relationships with business partners in Food Analytics, Operations, Marketing, and Web/App Development teams to power pipeline development for the business
  • Ensure system reliability and performance
  • Deploy and maintain data pipelines in production
  • Set an example of code quality, data quality, and best practices
  • Work with Analysts and Data Engineers to enable high quality self-service analytics for all of Hungryroot
  • Investigate datasets to answer business questions, ensuring data quality and business assumptions are understood before deploying a pipeline
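
Since the role centers on Spark pipelines in Databricks, here is a minimal PySpark sketch of a batch aggregation step; the dataset paths and column names are hypothetical, not Hungryroot's:

    # Hypothetical daily rollup: raw orders -> per-day order counts and revenue.
    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.appName("orders_pipeline").getOrCreate()

    orders = spark.read.parquet("s3://example-bucket/raw/orders/")  # assumed path
    daily = (
        orders
        .withColumn("order_date", F.to_date("created_at"))
        .groupBy("order_date")
        .agg(F.count(F.lit(1)).alias("orders"), F.sum("total").alias("revenue"))
    )
    daily.write.mode("overwrite").parquet("s3://example-bucket/marts/daily_orders/")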

AWS, Python, SQL, Apache Airflow, Data Mining, ETL, Snowflake, Algorithms, Amazon Web Services, Data engineering, Data Structures, Spark, CI/CD, RESTful APIs, Microservices, JSON, Scala, Data visualization, Data modeling, Data analytics, Data management

Apply

📍 United States

💸 135,000 - 155,000 USD per year

🔍 Software Development

🏢 Company: Jobgether · 👥 11-50 · 💰 $1,493,585 Seed about 2 years ago · Internet

Requirements:
  • 8+ years of experience as a data engineer, with a strong background in data lake systems and cloud technologies.
  • 4+ years of hands-on experience with AWS technologies, including S3, Redshift, EMR, Kafka, and Spark.
  • Proficient in Python or Node.js for developing data pipelines and creating ETLs.
  • Strong experience with data integration and frameworks like Informatica and Python/Scala.
  • Expertise in creating and managing AWS services (EC2, S3, Lambda, etc.) in a production environment.
  • Solid understanding of Agile methodologies and software development practices.
  • Strong analytical and communication skills, with the ability to influence both IT and business teams.
Responsibilities:
  • Design and develop scalable data pipelines that integrate enterprise systems and third-party data sources.
  • Build and maintain data infrastructure to ensure speed, accuracy, and uptime.
  • Collaborate with data science teams to build feature engineering pipelines and support machine learning initiatives.
  • Work with AWS cloud technologies like S3, Redshift, and Spark to create a world-class data mesh environment.
  • Ensure proper data governance and implement data quality checks and lineage at every stage of the pipeline.
  • Develop and maintain ETL processes using AWS Glue, Lambda, and other AWS services.
  • Integrate third-party data sources and APIs into the data ecosystem.
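
For the AWS Glue/Lambda ETL work above, a hedged sketch of driving a Glue job from Python with boto3; the job name is hypothetical:

    # Start a Glue job run and poll until it reaches a terminal state.
    import time

    import boto3

    glue = boto3.client("glue")
    run_id = glue.start_job_run(JobName="nightly_etl")["JobRunId"]  # hypothetical job

    while True:
        state = glue.get_job_run(JobName="nightly_etl", RunId=run_id)["JobRun"]["JobRunState"]
        if state in ("SUCCEEDED", "FAILED", "STOPPED", "TIMEOUT", "ERROR"):
            break
        time.sleep(30)

    print(f"Glue run {run_id} finished with state {state}")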

AWS, Node.js, Python, SQL, ETL, Kafka, Data engineering, Spark, Agile methodologies, Scala, Data modeling, Data management

Posted 4 days ago
Apply

📍 United States, Latin America, India

🧭 Full-Time

🔍 Software Development

Requirements:
  • 4+ years as a hands-on Data Engineer and/or Software Engineer designing and implementing data solutions
  • Programming expertise in Java, Python and/or Scala
  • Experience with software development life cycle, including unit and integration testing
  • Experience with core cloud data platforms including Snowflake, AWS, Azure, Databricks and GCP
  • Experience using SQL and the ability to write, debug, and optimize SQL queries
  • Client-facing written and verbal communication skills and experience
Responsibilities:
  • Design and implement data solutions
  • Help ensure performance, security, scalability, and robust data integration
  • Multitask, prioritize, and work across multiple projects at once
  • Create and deliver detailed presentations
  • Produce detailed solution documentation (e.g., POCs and roadmaps, sequence diagrams, class hierarchies, logical system views)

AWS, Python, Software Development, SQL, Cloud Computing, ETL, GCP, Java, Snowflake, Azure, Data engineering, Spark, Communication Skills, Scala, Data modeling

Posted 4 days ago
Apply