
Principal Data Engineer

Posted 8 days ago


💎 Seniority level: Principal, 10+ years

📍 Location: Sioux Falls, SD; Scottsdale, AZ; Troy, MI; Franklin, TN; Dallas, TX

💸 Salary: $99,501.91 - $183,764.31 per year

🔍 Industry: Financial services

🏢 Company: Pathward, N.A.

🗣️ Languages: English

⏳ Experience: 10+ years

🪄 Skills: AWS, Python, SQL, Snowflake, Data engineering

Requirements:
  • Bachelor’s degree or equivalent experience.
  • 10+ years delivering scalable, secure, and highly available technical data solutions.
  • 5+ years designing and building Data Engineering pipelines with technologies like Talend and Informatica.
  • Extensive SQL experience and familiarity with ELT processes using tools like Matillion.
  • Experience with Data Visualization tools such as PowerBI and ThoughtSpot.
  • Proficiency in Python, SQL Server, SSIS, and NoSQL databases.
  • 3+ years leading distributed Data Engineering teams in cloud environments like Snowflake and AWS.
  • Experience in the banking or financial services domain.
Responsibilities:
  • Leads a Data Engineering team with responsibility for planning, prioritization, architectural & business alignment, quality, and value delivery.
  • Develops flexible, maintainable, and reusable Data Engineering solutions using standards, best practices, and frameworks.
  • Solves complex development problems leveraging good design and practical experience.
  • Continuously improves existing systems, processes, and performance.
  • Leads and participates in planning and feature/user story analysis by providing feedback and demonstrating an understanding of business needs.
  • Documents software, best practices, standards, and frameworks.
  • Mentors staff and contributes as a subject matter expert in technical areas.

Related Jobs


📍 United States

💸 $210,000 - $220,000 per year

🔍 Healthcare

🏢 Company: Transcarent | 👥 251-500 | 💰 $126,000,000 Series D (8 months ago) | Personal Health, Health Care, Software

  • Experienced: 10+ years in data engineering with a strong background in building and scaling data architectures.
  • Technical Expertise: Advanced working knowledge of SQL, relational databases, big data tools (e.g., Spark, Kafka), and cloud-based data warehousing (e.g., Snowflake).
  • Architectural Visionary: Experience in service-oriented and event-based architecture with strong API development skills.
  • Problem Solver: Ability to manage and optimize processes for data transformation, metadata management, and workload management.
  • Collaborative Leader: Strong communication skills to present ideas clearly and lead cross-functional teams.
  • Project Management: Strong organizational skills, capable of leading multiple projects simultaneously.

  • Lead the Design and Implementation: Architect and implement cutting-edge data processing platforms and enterprise-wide data solutions using modern data architecture principles.
  • Scale Data Platform: Develop a scalable platform for data extraction, transformation, and loading from various sources, ensuring data integrity and accessibility.
  • AI / ML platform: Design and build scalable AI and ML platforms for Transcarent use cases.
  • Collaborate Across Teams: Work with Executive, Product, Clinical, Data, and Design teams to meet their data infrastructure needs.
  • Optimize Data Pipelines: Build and optimize complex data pipelines for high performance and reliability.
  • Innovate and Automate: Create and maintain data tools and pipelines for analytics and data science.
  • Mentor and Lead: Provide leadership and mentorship to the data engineering team.

AWS, Leadership, Project Management, Python, SQL, Java, Kafka, Snowflake, C++, Airflow, Data engineering, Spark, Communication Skills, Problem Solving, Organizational skills

Posted 19 days ago

📍 US

🔍 Artificial Intelligence, B2B Sales

🏢 Company: Seamless.AI | 👥 501-1000 | 💰 $75,000,000 Series A (over 3 years ago) | Sales Automation, Artificial Intelligence (AI), Lead Generation, Machine Learning, Information Technology, Software

  • Bachelor's degree in Computer Science, Information Systems, or equivalent work experience.
  • 7+ years of experience as a Data Engineer with a focus on ETL processes.
  • Professional experience with Spark and AWS pipeline development.
  • Strong proficiency in Python and related libraries (e.g., pandas, NumPy, PySpark).
  • Hands-on experience with AWS Glue or similar ETL tools.
  • Solid understanding of data modeling, data warehousing, and architecture principles.
  • Expertise in working with large data sets and distributed computing frameworks.
  • Experience in developing and training machine learning models.
  • Strong proficiency in SQL.
  • Familiarity with data matching and deduplication methodologies.
  • Experience with data governance, security, and privacy practices.
  • Strong problem-solving and analytical skills.
  • Excellent communication and collaboration skills.

  • Design, develop, and maintain robust and scalable ETL pipelines to acquire, transform, and load data from various sources.
  • Collaborate with cross-functional teams to understand data requirements and develop efficient acquisition and integration strategies.
  • Implement data transformation logic using Python and relevant programming languages.
  • Utilize AWS Glue or similar tools for ETL jobs and data catalogs.
  • Optimize and tune ETL processes for performance with large data sets.
  • Apply methodologies for data accuracy, including matching and aggregation.
  • Implement data governance practices for compliance and security.
  • Collaborate with the team to explore new technologies to enhance data processing.

AWS, Python, SQL, Artificial Intelligence, ETL, Machine Learning, NumPy, Data engineering, Pandas, Spark, Analytical Skills, Collaboration, Organizational skills, Compliance, Data modeling

Posted 19 days ago
🔥 Principal Data Engineer
Posted 3 months ago

📍 AZ, CA, CO, FL, GA, KS, KY, IA, ID, IL, IN, MA, ME, MI, MN, MO, NC, NH, NJ, NV, NY, OH, OR, PA, RI, SC, SD, TN, TX, UT, VA, VT, WA, WI

🧭 Full-Time

💸 $201,601 - $336,001 per year

🔍 Logistics and E-commerce

  • Bachelor's or Master's degree in Computer Science, Information Technology, or a related field.
  • 12+ years of experience in data architecture, data engineering, or similar roles, with a focus on cloud technologies and data platforms.
  • Proven experience in designing and scaling highly available data platforms, including data warehouses, data lakes, operational reporting, data science, and integrated data stores.
  • Strong expertise in modern data stack: Cloud data technologies (Databricks, Snowflake), data storage formats (Parquet, etc.), and query engines.
  • Extensive experience in data modeling, ETL processes, and both batch and real-time analytics frameworks.
  • Familiarity with modernizing data architectures and managing complex data migrations.
  • Proficiency in SQL and strong knowledge/proven experience with programming languages such as Python, Java, Scala, and tools like Airflow, dbt, and CI/CD tools (Git, Jenkins, Terraform).
  • Experience with streaming platforms like Kafka and integrating with BI tools like PowerBI, Tableau, or Looker.
  • Advanced analytical and problem-solving skills, with the ability to conduct complex data analysis and present findings in a clear, actionable way.
  • Strong communication and collaboration skills, with the ability to explain complex technical concepts to non-technical stakeholders.
  • Strong management experience, particularly in managing and mentoring technical teams, and driving large-scale, cross-functional data initiatives.
  • Ability to work in a fast-paced, dynamic environment, managing multiple projects effectively and meeting deadlines.
  • A mindset focused on innovation and continuous improvement, with a track record of initiating and running projects that have significantly improved data processes or outcomes.

  • Play a key role in defining and executing the data strategy for the organization, driving innovation, and influencing business outcomes through advanced data solutions.
  • Architect, design, and implement scalable data solutions that support diverse business needs across the organization.
  • Take full ownership of the entire data pipeline, from data ingestion and processing to storage, management, and delivery, ensuring seamless integration and high data quality.
  • Stay ahead of industry trends and emerging technologies, researching and experimenting with new tools, methodologies, and platforms to continually enhance the data infrastructure.
  • Drive the adoption of cutting-edge technologies that align with the company’s strategic goals.
  • Leverage cloud platforms to ensure the scalability, performance, and reliability of our data infrastructure.
  • Optimize databases through indexing, query optimization, and schema design, ensuring high availability and minimal downtime.
  • Lead the modernization of existing data architectures, implementing advanced technologies and methodologies.
  • Plan and execute complex migrations to cutting-edge platforms, ensuring minimal disruption and high data integrity.
  • Work closely with various departments, including Data Science, Analytics, Product, Compliance, and Engineering, to design cohesive data models, schemas, and storage solutions.
  • Translate business needs into technical specifications, driving projects that enhance customer experiences and outcomes.
  • Create and set best practices for data ingestion, integration, and access patterns to support both real-time and batch-based consumer data needs.
  • Implement and maintain robust data governance practices to ensure compliance with data security and privacy regulations.
  • Establish and enforce standards for data ingestion, integration, processing, and delivery to ensure data quality and consistency.
  • Mentor and guide junior team members, fostering a culture of continuous improvement and technical excellence.
  • Develop detailed documentation to capture design specifications and processes, ensuring ongoing maintenance is supported.
  • Manage design reviews to verify that proposed solutions effectively resolve platform and stakeholder obstacles while adhering to business and technical requirements.
  • Craft and deliver clear communications that articulate the architectural strategy and its alignment with the company's goals.

Python, SQL, Data Analysis, ETL, Git, Java, Jenkins, Kafka, Snowflake, Tableau, Strategy, Airflow, Data engineering, Data science, Collaboration, CI/CD, Problem Solving, Mentoring, Terraform, Scala, Data modeling

🔥 Principal Data Engineer
Posted 3 months ago

📍 United States

🧭 Full-Time

💸 $163,282 - $192,262 per year

🔍 Software/Data Visualization

🏢 Company: Hypergiant | 👥 101-250 | 💰 Corporate round (over 5 years ago) | Artificial Intelligence (AI), Machine Learning, Information Technology, Military

  • 15+ years of professional software development or data engineering experience (12+ with a STEM B.S. or 10+ with a relevant Master's degree).
  • Strong proficiency in Python and familiarity with Java and Bash scripting.
  • Hands-on experience implementing database technologies, messaging systems, and stream computing software (e.g., PostgreSQL, PostGIS, MongoDB, DuckDB, KsqlDB, RabbitMQ).
  • Experience with data fabric development using publish-subscribe models (e.g., Apache NiFi, Apache Pulsar, Apache Kafka and Kafka-based data service architecture).
  • Proficiency with containerization technologies (e.g., Docker, Docker-Compose, RKE2, Kubernetes, and Microk8s).
  • Experience with version control systems (e.g., Git), CI/CD tools (e.g., Jenkins), and collaborative development workflows.
  • Strong knowledge of data modeling and database optimization techniques.
  • Familiarity with data serialization languages (e.g., JSON, GeoJSON, YAML, XML).
  • Excellent problem-solving and analytical skills that have been applied to high visibility, important data engineering projects.
  • Strong communication skills and ability to lead the work of other engineers in a collaborative environment.
  • Demonstrated experience in coordinating team activities, setting priorities, and managing tasks to ensure balanced workloads and effective team performance.
  • Experience managing and mentoring development teams in an Agile environment.
  • Ability to make effective architecture decisions and document them clearly.
  • Must be a US Citizen and eligible to obtain and maintain a US Security Clearance.

  • Develop and continuously improve a data service that underpins cloud-based applications.
  • Support data and database modeling efforts.
  • Contribute to the development and maintenance of reusable component libraries and shared codebase.
  • Participate in the entire software development lifecycle, including requirement gathering, design, development, testing, and deployment, using an agile, iterative process.
  • Collaborate with developers, designers, testers, project managers, product owners, and project sponsors to integrate the data service to end-user applications.
  • Communicate tasking estimation and progress regularly to a development lead and product owner through appropriate tools.
  • Ensure seamless integration between database and messaging systems and the frontend/UI they support.
  • Ensure data quality, reliability, and performance through code reviews and effective testing strategies.
  • Write high-quality code, applying best practices, coding standards, and design patterns.
  • Team with other developers, fostering a culture of continuous learning and professional growth.

Docker, Leadership, PostgreSQL, Python, Software Development, Agile, Bash, Design Patterns, Git, Java, Jenkins, Kafka, Kubernetes, MongoDB, RabbitMQ, Apache Kafka, Data engineering, Data science, Communication Skills, Analytical Skills, CI/CD, JSON

🔥 Principal Data Engineer
Posted 4 months ago

📍 United States

🧭 Full-Time

💸 $210,000 - $220,000 per year

🔍 Healthcare

  • Experienced: 10+ years of experience in data engineering.
  • Technical Expertise: Advanced working knowledge of SQL, relational databases, and big data tools.
  • Architectural Visionary: Experience in service-oriented and event-based architecture.
  • Problem Solver: Ability to manage and optimize processes for data transformation.
  • Collaborative Leader: Strong communication skills and ability to lead cross-functional teams.
  • Project Management: Strong project management and organizational skills.

  • Lead the Design and Implementation: Architect and implement cutting-edge data processing platforms and enterprise-wide data solutions.
  • Scale Data Platform: Develop a scalable platform for optimal data extraction, transformation, and loading.
  • AI / ML platform: Design and build scalable AI and ML platforms to support Transcarent use cases.
  • Collaborate Across Teams: Partner with Executive, Product, Clinical, Data, and Design teams.
  • Optimize Data Pipelines: Build and optimize complex data pipelines for performance and reliability.
  • Innovate and Automate: Create and maintain data tools and pipelines for analytics and data science.
  • Mentor and Lead: Provide technical leadership and mentorship to the data engineering team.

AWS, Leadership, Project Management, Python, SQL, Java, Kafka, Snowflake, C++, Airflow, Data engineering, Data science, Spark, Communication Skills, C (Programming language)
