
Staff Data Engineer

Posted 4 days ago

💎 Seniority level: Staff, 8+ years

📍 Location: United States, Canada

💸 Salary: 158,000 - 239,000 USD per year

🔍 Industry: Software Development

🏢 Company: 1Password

🗣️ Languages: English

⏳ Experience: 8+ years

🪄 Skills: AWS, Python, SQL, ETL, GCP, Java, Kubernetes, MySQL, Snowflake, Algorithms, Apache Kafka, Azure, Data engineering, Data Structures, Postgres, RDBMS, Spark, CI/CD, RESTful APIs, Mentoring, Scala, Data visualization, Data modeling, Software Engineering, Data analytics, Data management

Requirements:
  • Minimum of 8 years of professional software engineering experience.
  • Minimum of 7 years of technical engineering experience building data processing applications (batch and streaming), with hands-on coding.
  • In-depth, hands-on experience with extensible data modeling and query optimization, working in Java, Scala, Python, and related technologies.
  • Experience in data modeling across external facing product insights and business processes, such as revenue/sales operations, finance, and marketing.
  • Experience with Big Data query engines such as Hive, Presto, Trino, Spark.
  • Experience with data stores such as Redshift, MySQL, Postgres, Snowflake, etc.
  • Experience using Realtime technologies like Apache Kafka, Kinesis, Flink, etc.
  • Experience building scalable services on top of public cloud infrastructure like Azure, AWS, or GCP with extensive use of datastores like RDBMS, key-value stores, etc.
  • Experience leveraging distributed systems at scale, with systems knowledge of infrastructure spanning bare-metal hosts, containers, and networking.
Responsibilities:
  • Design, develop, and automate large-scale, high-performance batch and streaming data processing systems to drive business growth and enhance product experience.
  • Build a data engineering strategy that supports a rapidly growing tech company and aligns with priorities across our product strategy and our internal business organizations' desire to leverage data for competitive advantage.
  • Build scalable data pipelines using best-in-class software engineering practices.
  • Develop optimal data models for storage and retrieval, meeting critical product and business requirements.
  • Establish and execute short and long-term architectural roadmaps in collaboration with Analytics, Data Platform, Business Systems, Engineering, Privacy and Security.
  • Lead efforts on continuous improvement to the efficiency and flexibility of the data, platform, and services.
  • Mentor Analytics & Data Engineers on best practices, standards and forward-looking approaches on building robust, extensible and reusable data solutions.
  • Influence and evangelize high standard of code quality, system reliability, and performance.

Related Jobs

🔥 Staff Data Engineer
Posted 3 days ago

📍 United States

🧭 Full-Time

💸 160,000 - 230,000 USD per year

🔍 Daily Fantasy Sports

  • 7+ years of experience in a data engineering or data-oriented software engineering role, creating and shipping end-to-end data pipelines.
  • 3+ years of experience acting as technical lead and providing mentorship and feedback to junior engineers.
  • Extensive experience building and optimizing cloud-based data streaming pipelines and infrastructure.
  • Extensive experience exposing real-time predictive model outputs to production-grade systems leveraging large-scale distributed data processing and model training.
  • Experience in most of the following:
  • SQL/NoSQL databases/warehouses: Postgres, BigQuery, BigTable, Materialize, AlloyDB, etc.
  • Replication/ELT services: Data Stream, Hevo, etc.
  • Data transformation services: Spark, Dataproc, etc.
  • Scripting languages: SQL, Python, Go
  • Cloud platform services in GCP and analogous systems: Cloud Storage, Compute Engine, Cloud Functions, Kubernetes Engine, etc.
  • Data processing and messaging systems: Kafka, Pulsar, Flink
  • Code version control: Git
  • Data pipeline and workflow tools: Argo, Airflow, Cloud Composer
  • Monitoring and observability platforms: Prometheus, Grafana, ELK stack, Datadog
  • Infrastructure as Code platforms: Terraform, Google Cloud Deployment Manager
  • Other platform tools such as Redis, FastAPI, and Streamlit
  • Excellent organizational, communication, presentation, and collaboration skills with both technical and non-technical teams across the organization
  • Graduate degree in Computer Science, Mathematics, Informatics, Information Systems or other quantitative field
  • Enhance the capabilities of our existing Core Data Platform and develop new integrations with both internal and external APIs within the Data organization.
  • Develop and maintain advanced data pipelines and transformation logic using Python and Go, ensuring efficient and reliable data processing.
  • Collaborate with Data Scientists and Data Science Engineers to support the needs of advanced ML development.
  • Collaborate with Analytics Engineers to enhance data transformation processes, streamline CI/CD pipelines, and optimize team collaboration workflows using dbt.
  • Work closely with DevOps and Infrastructure teams to ensure the maturity and success of the Core Data platform.
  • Guide teams in implementing and maintaining comprehensive monitoring, alerting, and documentation practices, and coordinate with Engineering teams to ensure continuous feature availability.
  • Design and implement Infrastructure as Code (IaC) solutions to automate and streamline data infrastructure deployment, ensuring scalable, consistent configurations aligned with data engineering best practices.
  • Build and maintain CI/CD pipelines to automate the deployment of data solutions, ensuring robust testing, seamless integration, and adherence to best practices in version control, automation, and quality assurance.
  • Design and automate data governance workflows and tool integrations across complex environments, ensuring data integrity and protection throughout the data lifecycle.
  • Serve as a Staff Engineer within the broader PrizePicks technology organization by staying current with emerging technologies, implementing innovative solutions, and sharing knowledge and best practices with junior team members and collaborators.
  • Ensure code is thoroughly tested, effectively integrated, and efficiently deployed, in alignment with industry best practices for version control, automation, and quality assurance.
  • Mentor and support junior engineers by providing guidance, coaching, and educational opportunities.
  • Provide on-call support as part of a shared rotation between the Data and Analytics Engineering teams to maintain system reliability and respond to critical issues.

Leadership, Python, SQL, Cloud Computing, ETL, GCP, Git, Kafka, Kubernetes, Airflow, Data engineering, Go, Postgres, REST API, Spark, CI/CD, Mentoring, DevOps, Terraform, Data visualization, Data modeling, Scripting

🔥 Staff Data Engineer
Posted 4 days ago

📍 Boston, MA; Vancouver, BC; Chicago, IL; and Vancouver, WA.

🧭 Full-Time

💸 200,000 - 228,000 USD per year

🔍 Software Development

🏢 Company: Later 👥 1-10 · Consumer Electronics, iOS, Apps, Software

  • 10+ years of experience in data engineering, software engineering, or related fields.
  • Proven experience leading the technical strategy and execution of large-scale data platforms.
  • Expertise in cloud technologies (Google Cloud Platform, AWS, Azure) with a focus on scalable data solutions (BigQuery, Snowflake, Redshift, etc.).
  • Strong proficiency in SQL, Python, and distributed data processing frameworks (Apache Spark, Flink, Beam, etc.).
  • Extensive experience with streaming data architectures using Kafka, Flink, Pub/Sub, Kinesis, or similar technologies.
  • Expertise in data modeling, schema design, indexing, partitioning, and performance tuning for analytical workloads, including data governance (security, access control, compliance: GDPR, CCPA, SOC 2)
  • Strong experience designing and optimizing scalable, fault-tolerant data pipelines using workflow orchestration tools like Airflow, Dagster, or Dataflow.
  • Ability to lead and influence engineering teams, drive cross-functional projects, and align stakeholders towards a common data vision.
  • Experience mentoring senior and mid-level data engineers to enhance team performance and skill development.
  • Lead the design and evolution of a scalable data architecture that meets analytical, machine learning, and operational needs.
  • Architect and optimize data pipelines for batch and real-time data processing, ensuring efficiency and reliability.
  • Implement best practices for distributed data processing, ensuring scalability, performance, and cost-effectiveness of data workflows.
  • Define and enforce data governance policies, implement automated validation checks, and establish monitoring frameworks to maintain data integrity.
  • Ensure data security and compliance with industry regulations by designing appropriate access controls, encryption mechanisms, and auditing processes.
  • Drive innovation in data engineering practices by researching and implementing new technologies, tools, and methodologies.
  • Work closely with data scientists, engineers, analysts, and business stakeholders to understand data requirements and deliver impactful solutions.
  • Develop reusable frameworks, libraries, and automation tools to improve efficiency, reliability, and maintainability of data infrastructure.
  • Guide and mentor data engineers, fostering a high-performing engineering culture through best practices, peer reviews, and knowledge sharing.
  • Establish and monitor SLAs for data pipelines, proactively identifying and mitigating risks to ensure high availability and reliability.

AWS, Leadership, Python, SQL, Cloud Computing, ETL, GCP, Kafka, Kubernetes, Snowflake, Airflow, Azure, Data engineering, Communication Skills, Analytical Skills, Problem Solving, Mentoring, DevOps, Data visualization, Data modeling, Data analytics, Data management

🔥 Staff Data Engineer
Posted 11 days ago

📍 United States

🧭 Full-Time

💸 160,000 - 230,000 USD per year

🔍 Daily Fantasy Sports

🏢 Company: PrizePicks 👥 101-250 💰 Corporate about 2 years ago · Gaming, Fantasy Sports, Sports

  • 7+ years of experience in a data engineering or data-oriented software engineering role, creating and shipping end-to-end data pipelines.
  • 3+ years of experience acting as technical lead and providing mentorship and feedback to junior engineers.
  • Extensive experience building and optimizing cloud-based data streaming pipelines and infrastructure.
  • Extensive experience exposing real-time predictive model outputs to production-grade systems leveraging large-scale distributed data processing and model training.
  • Experience in most of the following:
  • Excellent organizational, communication, presentation, and collaboration skills with both technical and non-technical teams across the organization
  • Graduate degree in Computer Science, Mathematics, Informatics, Information Systems or other quantitative field
  • Enhance the capabilities of our existing Core Data Platform and develop new integrations with both internal and external APIs within the Data organization.
  • Develop and maintain advanced data pipelines and transformation logic using Python and Go, ensuring efficient and reliable data processing.
  • Collaborate with Data Scientists and Data Science Engineers to support the needs of advanced ML development.
  • Collaborate with Analytics Engineers to enhance data transformation processes, streamline CI/CD pipelines, and optimize team collaboration workflows using dbt.
  • Work closely with DevOps and Infrastructure teams to ensure the maturity and success of the Core Data platform.
  • Guide teams in implementing and maintaining comprehensive monitoring, alerting, and documentation practices, and coordinate with Engineering teams to ensure continuous feature availability.
  • Design and implement Infrastructure as Code (IaC) solutions to automate and streamline data infrastructure deployment, ensuring scalable, consistent configurations aligned with data engineering best practices.
  • Build and maintain CI/CD pipelines to automate the deployment of data solutions, ensuring robust testing, seamless integration, and adherence to best practices in version control, automation, and quality assurance.
  • Design and automate data governance workflows and tool integrations across complex environments, ensuring data integrity and protection throughout the data lifecycle.
  • Serve as a Staff Engineer within the broader PrizePicks technology organization by staying current with emerging technologies, implementing innovative solutions, and sharing knowledge and best practices with junior team members and collaborators.
  • Ensure code is thoroughly tested, effectively integrated, and efficiently deployed, in alignment with industry best practices for version control, automation, and quality assurance.
  • Mentor and support junior engineers by providing guidance, coaching, and educational opportunities.
  • Provide on-call support as part of a shared rotation between the Data and Analytics Engineering teams to maintain system reliability and respond to critical issues.

AWS, Backend Development, Docker, Leadership, Python, SQL, Apache Airflow, Cloud Computing, ETL, Git, Kafka, Kubernetes, RabbitMQ, Algorithms, Apache Kafka, Data engineering, Data Structures, Go, Postgres, Spark, Communication Skills, Analytical Skills, Collaboration, CI/CD, Problem Solving, RESTful APIs, Mentoring, Linux, DevOps, Terraform, Excellent communication skills, Data visualization, Data modeling, Scripting, Software Engineering, Data analytics, Data management

🔥 Staff Data Engineer
Posted 23 days ago

📍 AL, AR, AZ, CA, CO, CT, FL, GA, ID, IL, IN, IA, KS, KY, MA, ME, MD, MI, MN, MO, MT, NC, NE, NJ, NM, NV, NY, OH, OK, OR, PA, SC, SD, TN, TX, UT, VT, VA, WA, WI

🧭 Full-Time

🔍 Insurance

🏢 Company: Kin Insurance

  • 7+ years of hands-on data engineering experience related to: data structures and cloud platform environments and best practices (AWS strongly preferred; Azure or GCP also considered)
  • ETL performance tuning and cost optimization
  • Data lake and lakehouse patterns including open table formats (e.g. Iceberg, Hudi, Delta)
  • Proficiency in Python (Pandas, NumPy, etc.) and SQL for advanced data processing and querying
  • Experience influencing technical strategy, optimizing systems for scale, and reviewing team designs
  • Expertise in distributed data processing/storage (e.g., Apache Spark, Kafka, Hadoop, or similar)
  • Excellent communication skills and ability to explain complex concepts clearly and concisely
  • Detail-oriented with strong data intuition and a passion for data quality
  • Proven ability to model data and build production-ready ETL pipelines handling TBs of data.
  • Great time management and prioritization skills
  • Designing and developing scalable data pipelines and models for downstream analytics and reporting
  • Leading and collaborating with a cross-functional project team to implement data validation, QA standards, and effective data lifecycle management
  • Optimizing pipeline performance, cost, and data quality in a large-scale data environment
  • Migrating the data warehouse (dbt, Redshift) architecture to a lakehouse architecture (e.g., S3, Glue, Databricks, Unity Catalog)
  • Mentoring data engineers and promoting best practices in software engineering, documentation, and metadata management
  • Ensuring data security and compliance with regulations (e.g., GDPR, CCPA, GLBA) through robust pipeline design and access monitoring
  • Translating ambiguous business requirements into technical solutions using marketing domain knowledge

AWS, Python, SQL, Cloud Computing, ETL, Kafka, Data engineering, Data Structures, Pandas, Mentoring, Time Management, Excellent communication skills, Data visualization, Data modeling, Data management


📍 North America, Europe

💸 220,000 - 270,000 USD per year

🔍 AI Research

🏢 Company: Runway

  • 5+ years of industry experience in data engineering, analytics engineering, or similar roles
  • Strong proficiency in SQL and experience with modern data warehousing solutions (e.g., Snowflake, BigQuery, Redshift)
  • Experience designing and implementing ETL/ELT pipelines using tools like dbt, Airbyte, or similar.
  • Experience with Python or another programming language for data manipulation and automation
  • Knowledge of data privacy and data security best practices
  • Ability to translate business requirements into technical specifications and data models
  • Own the data pipeline, including the ingestion, storage, transformation, and serving of data
  • Build and optimize ETL processes to ensure data reliability, accuracy, and accessibility
  • Design, implement, and maintain a modern data warehouse architecture using industry best practices
  • Develop data models, tables, and views that empower teams to self-serve their analytics needs
  • Collaborate with ML Engineers, infrastructure engineers, and product managers to identify data requirements, build efficient data delivery systems, and create a unified data foundation that supports both technical innovation and business growth

AWS, PostgreSQL, Python, SQL, Cloud Computing, ETL, GCP, Snowflake, Airflow, Azure, Data engineering, Data modeling, Data analytics

Posted 26 days ago
🔥 Staff Data Engineer (Remote)
Posted about 2 months ago

📍 United States

🧭 Full-Time

🔍 Medical Technology

🏢 Company: external_career_site_usa

  • 7+ years of experience programming in SQL
  • 5+ years programming in Snowflake
  • 3+ years working with FiveTran
  • 3+ years of experience in ETL/data movement including both batch and real-time data transmission using dbt
  • 5+ years of experience in data modeling, including normalized and dimensional modeling
  • 5+ years of experience optimizing performance in ETL and reporting layers
  • 3+ years of hands-on development and deployment experience with Azure cloud using .NET, T-SQL, Azure SQL, Azure Storage, Azure Data Factory, Cosmos DB, GitHub, Azure DevOps, and CI/CD pipelines
  • 3+ years with APIs, microservices
  • 3+ years of experience programming in Python/Java/C#
  • Build next-generation distributed streaming data pipelines and analytics data stores using streaming frameworks (Kafka, Spark Streaming, etc.), programming languages like Python, and ELT tools like Fivetran and dbt.
  • Perform unit tests and conduct reviews with other team members to make sure your code is rigorously designed, elegantly coded, and effectively tuned for performance.
  • Develop and deploy structured, semi-structured, and unstructured data storage models such as data lake and dimensional modeling
  • Evaluate and define functional requirements for data and analytical solutions
  • Monitor system performance, identify issues related to performance, scalability, and reliability, and design solutions to remediate
  • Partner with IT, Data Governance, and subject matter experts to create, automate, and deploy effective data quality monitors.
  • Develop and maintain enterprise data dictionary of data warehouse transformations and business rules
  • Ensure healthcare information security best-practice controls are in place and adhere to HIPAA, utilizing a common control framework (e.g., NIST, HITRUST)
  • Implement master data management (MDM) solutions to collect, maintain, and leverage data for common business entities
  • Implement metadata management solutions to collect, maintain, and leverage application and system metadata

Python, SQL, ETL, Java, Kafka, Snowflake, C#, Tableau, Azure, Data engineering, CI/CD, RESTful APIs, Terraform, Microservices, Data visualization, Data modeling

🔥 Staff Data Engineer
Posted 3 months ago

📍 United States

🧭 Full-Time

🔍 Software Development

🏢 Company: Life360 👥 251-500 💰 $33,038,258 Post-IPO Equity over 2 years ago 🫂 Last layoff over 2 years ago · Android, Family, Apps, Mobile Apps, Mobile

  • Minimum 7 years of experience working with high-volume data infrastructure.
  • Experience with Databricks and AWS.
  • Experience with dbt.
  • Experience with job orchestration tooling like Airflow.
  • Proficient programming in Python.
  • Proficient with SQL and the ability to optimize complex queries.
  • Proficient with large-scale data processing using Spark and/or Presto/Trino.
  • Proficient in data modeling and database design.
  • Experience with streaming data with a tool like Kinesis or Kafka.
  • Experience working with high-volume, event-based data architectures like Amplitude and Braze.
  • Experience in modern development lifecycle including Agile methodology, CI/CD, automated deployments using Terraform, GitHub Actions, etc.
  • Knowledge and proficiency in the latest open source and data frameworks, modern data platform tech stacks and tools.
  • Always learning and staying up to speed with the fast-moving data world.
  • You have good communication and collaboration skills and can work independently.
  • BS in Computer Science, Software Engineering, Mathematics, or equivalent experience.
  • Design, implement, and manage scalable data processing platforms used for real-time analytics and exploratory data analysis.
  • Manage our financial data from ingestion through ETL to storage and batch processing.
  • Automate, test and harden all data workflows.
  • Architect logical and physical data models to ensure the needs of the business are met.
  • Collaborate across the data teams, engineering, data science, and analytics, to understand their needs, while applying engineering best practices.
  • Architect and develop systems and algorithms for distributed real-time analytics and data processing.
  • Implement strategies for acquiring data to develop new insights.
  • Mentor junior engineers, imparting best practices and institutionalizing efficient processes to foster growth and innovation within the team.
  • Champion data engineering best practices, institutionalizing efficient processes to foster growth and innovation within the team.

AWS, Project Management, Python, SQL, Apache Airflow, ETL, Kafka, Algorithms, Data engineering, Data Structures, REST API, Spark, Communication Skills, Analytical Skills, Collaboration, CI/CD, Problem Solving, Agile methodologies, Mentoring, Terraform, Data visualization, Technical support, Data modeling, Data analytics, Data management, Debugging

🔥 Staff Data Engineer
Posted 5 months ago

📍 United States

🔍 Cyber security

🏢 Company: BeyondTrust 👥 1001-5000 💰 Private about 4 years ago · Cloud Computing, Security, Cloud Security, Cyber Security, Software

  • Strong programming and technology knowledge in cloud data processing.
  • Previous experience working in mature data lakes.
  • Strong data modeling skills for analytical workloads.
  • Spark (or equivalent parallel processing framework) experience is needed; existing Databricks knowledge is a plus.
  • Interest and aptitude for cybersecurity; interest in identity security is highly preferred.
  • Technical understanding of underlying systems and computation minutiae.
  • Experience working with distributed systems and data processing on object stores.
  • Ability to work autonomously.
  • Optimize data workloads at a software level by improving processing efficiency.
  • Develop new data processing routes to remove redundancy or reduce transformation overhead.
  • Monitor and maintain existing data workflows.
  • Use observability best practices to ensure pipeline performance.
  • Perform complex transformations on both real-time and batch data assets.
  • Create new ML/Engineering solutions to tackle existing issues in the cybersecurity space.
  • Leverage CI/CD best practices to effectively develop and release source code.

Python, Spark, CI/CD, Data modeling


📍 Paris, New York, San Francisco, Sydney, Madrid, London, Berlin

🔍 Communication technology

  • Passionate about data engineering.
  • Experience in designing and developing data infrastructure.
  • Technical skills to solve complex challenges.
  • Play a crucial role in designing, developing, and maintaining data infrastructure.
  • Collaborate with teams across the company to solve complex challenges.
  • Improve operational efficiency and lead business towards strategic goals.
  • Contribute to engineering efforts that enhance customer journey.

AWS, PostgreSQL, Python, SQL, Apache Airflow, ETL, Data engineering

Posted 7 months ago