
Principal Data Engineer

Posted 12 days ago


💎 Seniority level: Principal, 5+ years

📍 Location: United States

💸 Salary: 180,000 - 200,000 USD per year

🔍 Industry: Data Engineering

🏢 Company: InMarket · 👥 251-500 · 💰 $11,500,000 Debt Financing almost 4 years ago · Digital Marketing, Advertising, Mobile Advertising, Marketing

🗣️ Languages: English

⏳ Experience: 5+ years

🪄 Skills: Docker, Python, SQL, Apache Airflow, GCP, Hadoop, Kubernetes, Spark

Requirements:
  • Strong SQL experience
  • Expert in a data pipelining framework (Airflow, Luigi, etc.)
  • Experience building ETL pipelines with Python, SQL, and Spark
  • Strong software engineering skills in Java or Python
  • Experience optimizing data warehouses on cloud platforms
  • Understanding of Big Data Technologies (Hadoop, Spark)
  • Knowledge of Kubernetes, Docker, and CI/CD best practices
  • B.S. or M.S. in Computer Science or a related field
Responsibilities:
  • Design and implement ETL pipelines in Apache Airflow, BigQuery, Python, and Spark (see the illustrative sketch after this list)
  • Promote Data Engineering best practices
  • Architect and plan complex cross-team projects
  • Provide technical guidance to engineers
  • Communicate analyses effectively to stakeholders
  • Identify areas for process improvement
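
For illustration only, not part of the posting: a minimal sketch of the kind of Airflow ETL pipeline this role describes, assuming Airflow 2.4+ with Python. The DAG id, task names, and sample data are hypothetical placeholders.

```python
# Hypothetical, minimal Airflow 2.x DAG: an extract -> transform pipeline of the
# kind this posting describes. All names and data here are illustrative.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract():
    # Stand-in for pulling raw records from a source system (API, bucket, etc.).
    return [{"user_id": 1, "event": "click"}]


def transform(ti):
    # Read the upstream task's return value from XCom and enrich each record.
    rows = ti.xcom_pull(task_ids="extract")
    return [{**row, "processed_at": datetime.utcnow().isoformat()} for row in rows]


with DAG(
    dag_id="example_etl",  # hypothetical DAG id
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    transform_task = PythonOperator(task_id="transform", python_callable=transform)
    # In a real pipeline, a load step (e.g., into BigQuery) and heavier Spark
    # transforms would follow; they are omitted from this sketch.
    extract_task >> transform_task
```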

Related Jobs


📍 United States, Canada

🧭 Full-Time

💸 150,000 - 180,000 USD per year

🔍 SaaS

🏢 Company: Ceros · 👥 101-250 · 💰 $100,000,000 Private over 4 years ago · Advertising, Content Creators, Content Marketing, Graphic Design, Software

Requirements:
  • 8+ years of experience in data engineering, product analytics, or SaaS.
  • Strong experience with SQL, AWS Redshift, and ETL pipelines.
  • Proficiency with BI tools (Looker, Metabase, Mixpanel).
  • Experience with AI-driven analytics and NLP techniques.
  • Strong communication and stakeholder management skills.
  • Highly organized and capable of managing multiple priorities.
  • Experience in SaaS, especially in content creation or enterprise tech.
  • Familiarity with customer feedback and conversational intelligence tools.
  • Hands-on experience with Python or scripting languages for data automation.
Responsibilities:
  • Own and optimize data architecture, ensuring scalability and efficiency.
  • Partner with Ops Engineering to improve data pipelines and integrations.
  • Manage and optimize AWS Redshift and other key platforms.
  • Define and enforce data governance best practices.
  • Extract insights from customer interactions using AI-driven analytics.
  • Identify trends in feature requests, pain points, and product issues.
  • Develop dashboards to provide actionable insights for stakeholders.
  • Ensure data is structured and available for sales tracking, forecasting, and segmentation.
  • Support revenue modeling and churn risk analysis.
  • Maintain CRM data integrity and enable data-driven sales strategies.
  • Support A/B testing and controlled experiments to optimize product and sales decisions.
  • Develop models to measure customer engagement and sales effectiveness.
  • Build predictive models for customer retention and revenue growth.
  • Work with stakeholders to enable data-driven decision-making.
  • Develop self-service tools and training materials.
  • Promote best practices and data accessibility across teams.

AWS, Python, SQL, ETL, Data engineering, Communication Skills, Analytical Skills, Data visualization, Stakeholder management, CRM, Data modeling, SaaS, A/B testing

Posted 9 days ago

📍 AL, AR, AZ, CA (exempt only), CO, CT, FL, GA, ID, IL, IN, IA, KS, KY, MA, ME, MD, MI, MN, MO, MT, NC, NE, NJ, NM, NV, NY, OH, OK, OR, PA, SC, SD, TN, TX, UT, VT, VA, WA, and WI

🧭 Full-Time

🔍 Insurance

🏢 Company: Kin Insurance

Requirements:
  • 10+ years of experience designing and architecting data systems, warehousing, and/or ML Ops platforms
  • Proven experience in the design and architecture of large-scale data systems, including lakehouse and machine learning platforms, using modern cloud-based tools
  • Ability to communicate effectively with executives and team members
  • Expertise in data architecture and design for both structured and unstructured data, and in data modeling across transactional, BI, and DS usage models
  • Fluency with modern cloud data stacks, SaaS solutions, and evolutionary architecture, enabling flexible, scalable, and cost-effective solutions
  • Expertise with all aspects of data management: data governance, data mastering, metadata management, data taxonomies and ontologies
  • Expertise in architecting and delivering highly scalable, flexible, and cost-effective cloud-based enterprise data solutions
  • Proven ability to develop and implement data architecture roadmaps and comprehensive implementation plans that align with business strategies
  • Experience working with data integration tools and Kafka for real-time data streaming to ensure seamless data flow across the organization
  • Expertise in dbt for transformations, pipelines, and data modeling
  • Experience with data analytics, BI, data reporting and data visualization
  • Experience with the insurance domain is a plus
Responsibilities:
  • Architect and implement solutions using Databricks for advanced analytics, data processing, and machine learning workloads.
  • Lead the overall data architecture design, including ingestion, storage, management, and machine learning engineering platforms.
  • Implement the architecture and create a vision for how data will flow through the organization using a federated approach.
  • Manage data governance across multiple systems: data mastering, metadata management, data definitions, semantic-layer design, data taxonomies, and ontologies.
  • Architect and deliver highly scalable, flexible, and cost-effective enterprise data solutions that support the development of architecture patterns and standards.
  • Help define the technology strategy and roadmaps for the portfolio of data platforms and services across the organization.
  • Ensure data security and compliance, working within industry regulations.
  • Design and document data architecture at multiple levels across conceptual, logical, and physical views.
  • Provide “hands-on” architectural guidance and leadership throughout the entire lifecycle of development projects.
  • Translate business requirements into conceptual and detailed technology solutions that meet organizational goals.
  • Collaborate with other architects, engineering directors, and product managers to align data solutions with business strategy.
  • Lead proof-of-concept projects to test and validate new tools or architectural approaches.
  • Stay current with industry trends, vendor product offerings, and evolving data technologies to ensure the organization leverages the best available tools.
  • Cross-train peers and mentor team members to share knowledge and build internal capabilities.

AWS, Leadership, Python, SQL, Apache Airflow, Cloud Computing, ETL, Kafka, Kubernetes, Machine Learning, Snowflake, Algorithms, Data engineering, Data science, Data Structures, REST API, Pandas, Spark, TensorFlow, Communication Skills, Analytical Skills, Collaboration, CI/CD, Problem Solving, Mentoring, Terraform, Microservices, JSON, Data visualization, Data modeling, Data analytics, Data management

Posted 16 days ago

📍 United States, Canada

🧭 Full-Time

🔍 Healthcare

Requirements:
  • 8+ years of experience delivering results-oriented development within a collaborative environment.
  • Previous experience in leading international and distributed teams.
  • Experience with Python, NoSQL databases and Data Lake.
  • Experience in designing and implementing scalable data pipelines using modern data processing frameworks (e.g., Spark, Kafka).
  • Experience setting up cloud-based data pipelines.
  • Experience with serverless data pipelines.
  • Knowledge of cloud platforms such as AWS.
  • Deep understanding of data governance, data quality and data security principles.
  • Previous experience in regulated environments and knowledge of privacy and security considerations in data engineering.
  • Strong problem-solving and analytical skills, with a passion for building robust and efficient data solutions.
  • Excellent communication and collaboration skills, with the ability to work effectively with cross-functional teams.
Responsibilities:
  • Lead and mentor a dynamic team of data engineers, ensuring smooth management and optimization of healthcare data systems and processes.
  • Architect and develop scalable data pipelines and infrastructure to collect, process, and analyze large volumes of healthcare data.
  • Collaborate with data scientists, product teams, and user experience experts to understand data requirements and translate them into technical solutions.
  • Design and optimize data models and databases to ensure efficient data storage, retrieval and analysis.
  • Implement best practices for data quality, data governance, and data security to ensure compliance and privacy.
  • Collaborate with external partners and vendors to integrate data sources and systems effectively.
  • Mentor and provide technical guidance to junior members of the data engineering team.
  • Stay up to date with emerging technologies and industry trends in data engineering and contribute to the continuous improvement of the data infrastructure.

AWS, PostgreSQL, Python, ETL, Kafka, NoSQL, Spark, Terraform

Posted 25 days ago
🔥 Principal Data Engineer (01323)

📍 United States

🧭 Full-Time

💸 142,771 - 225,000 USD per year

🔍 Media and Analytics

Requirements:
  • Master's degree in Computer Science, Data Science, engineering, mathematics, or a related quantitative field plus 3 years of experience in analytics software solutions.
  • Bachelor's degree in similar fields plus 5 years of experience is also acceptable.
  • 3 years of experience with Python and associated technologies, including Spark, AWS, S3, Java, JavaScript, and Adobe Analytics.
  • Proficiency in SQL for querying and managing data.
  • Experience in analytics programming languages such as Python (with Pandas).
  • Experience in handling large volumes of data and code management tools like Git.
  • 2 years of experience orchestrating computer programs using open-source workflow management platforms like Airflow.
Responsibilities:
  • Develop, test, and orchestrate econometric, statistical, and machine learning modules.
  • Conduct unit, integration, and regression testing.
  • Create data processing systems for analytic research and development.
  • Design, document, and present process flows for analytical systems.
  • Partner with Software Engineering for cloud-based solutions.
  • Orchestrate modules via directed acyclic graphs using workflow management systems.
  • Work in an agile development environment.

AWS, Python, SQL, Apache Airflow, Git, Machine Learning, Data engineering, Regression testing, Pandas, Spark

Posted about 1 month ago
🔥 Principal Data Engineer

📍 United States

🧭 Full-Time

💸 185,000 - 215,000 USD per year

🔍 Media & Entertainment

Requirements:
  • 7+ years of experience in data engineering or software development
  • Expertise in AWS and distributed data technologies (e.g., Spark, Hadoop)
  • Proficiency with programming languages such as Python, PySpark, and JavaScript
  • Experience with large-scale data pipeline design and streaming data
  • Hands-on experience with AWS services like Lambda and S3
  • Strong SQL and data querying skills
Responsibilities:
  • Architect, design, and oversee development of data pipelines
  • Design and build analytics pipelines and business intelligence solutions
  • Work closely with cross-functional teams on data integration
  • Provide leadership and mentorship to data engineering team
  • Ensure compliance with best practices in data governance and quality
  • Drive data scalability and performance across cloud platforms

AWS, Python, SQL, Apache Airflow, Cloud Computing, Apache Kafka, Data engineering

Posted 3 months ago
🔥 Principal Data Engineer

📍 Sioux Falls, SD, Scottsdale, AZ, Troy, MI, Franklin, TN, Dallas, TX

🧭 Full-Time

💸 99,501.91 - 183,764.31 USD per year

🔍 Financial Services

🏢 Company: Pathward, N.A.

Requirements:
  • Bachelor’s degree or equivalent experience.
  • 10+ years delivering scalable, secure, and highly available technical data solutions.
  • 5+ years of experience designing and building Data Engineering pipelines with industry leading technologies such as Talend, Informatica, etc.
  • Extensive SQL experience.
  • Experience with ELT processes for transformation push down and data replication using tools such as Matillion.
  • Experience with Data Visualization tools such as PowerBI, ThoughtSpot, or others.
  • Experience with Python.
  • Experience with SQL Server, SSIS, and NoSQL databases preferred.
  • 3+ years of complex enterprise experience leading distributed Data Engineering teams.
  • 3+ years of experience in cloud data environments leveraging technologies such as Snowflake and AWS.
  • 3+ years of banking or financial services and products domain experience.
  • Strong data architecture, critical thinking, and problem-solving abilities.
  • Ability to communicate effectively with all levels in the company.
Responsibilities:
  • Leads a Data Engineering team with responsibility for planning, prioritization, architectural & business alignment, quality, and value delivery.
  • Develops flexible, maintainable, and reusable Data Engineering solutions using standards, best practices, and frameworks with a focus on efficiency and innovation.
  • Solves complex development problems leveraging good design and practical experience.
  • Continually looks at ways to improve existing systems, processes, and performance.
  • Leads and participates in planning and feature/user story analysis by providing feedback and demonstrating an understanding of business needs.
  • Solves business problems by implementing technical solutions based on solid design principles and best practices.
  • Identifies and contributes to opportunities for departmental & team improvements.
  • Documents software, best practices, standards, and frameworks.
  • Mentors staff by providing technical advice, helping with issue resolution, and providing feedback.
  • Contributes as a subject matter expert in technical areas.
  • Keeps up to date with current and future changes in tools, technology, best practices, and industry standards through training and development opportunities.
  • Identifies and leads learning and training opportunities for other team members and staff.

AWS, Python, SQL, Snowflake, Data engineering

Posted 3 months ago
🔥 Principal Data Engineer

📍 United States

💸 210,000 - 220,000 USD per year

🔍 Healthcare

🏢 Company: Transcarent · 👥 251-500 · 💰 $126,000,000 Series D 10 months ago · Personal Health, Health Care, Software

Requirements:
  • Experienced: 10+ years in data engineering with a strong background in building and scaling data architectures.
  • Technical Expertise: Advanced working knowledge of SQL, relational databases, big data tools (e.g., Spark, Kafka), and cloud-based data warehousing (e.g., Snowflake).
  • Architectural Visionary: Experience in service-oriented and event-based architecture with strong API development skills.
  • Problem Solver: Manage and optimize processes for data transformation, metadata management, and workload management.
  • Collaborative Leader: Strong communication skills to present ideas clearly and lead cross-functional teams.
  • Project Management: Strong organizational skills, capable of leading multiple projects simultaneously.
Responsibilities:
  • Lead the Design and Implementation: Architect and implement cutting-edge data processing platforms and enterprise-wide data solutions using modern data architecture principles.
  • Scale Data Platform: Develop a scalable Platform for data extraction, transformation, and loading from various sources, ensuring data integrity and accessibility.
  • AI / ML platform: Design and build scalable AI and ML platforms for Transcarent use cases.
  • Collaborate Across Teams: Work with Executive, Product, Clinical, Data, and Design teams to meet their data infrastructure needs.
  • Optimize Data Pipelines: Build and optimize complex data pipelines for high performance and reliability.
  • Innovate and Automate: Create and maintain data tools and pipelines for analytics and data science.
  • Mentor and Lead: Provide leadership and mentorship to the data engineering team.

AWS, Leadership, Project Management, Python, SQL, Java, Kafka, Snowflake, C++, Airflow, Data engineering, Spark, Communication Skills, Problem Solving, Organizational skills

Posted 3 months ago