
Senior Data Engineer

Posted 3 months ago

πŸ’Ž Seniority level: Senior, 5+ years of experience as a Data Engineer

πŸ’Έ Salary: 95,000 - 230,000 USD per year

πŸ” Industry: Cryptocurrency / Blockchain

🏒 Company: 0x πŸ‘₯ 51-100 πŸ’° $70,000,000 Series B (almost 3 years ago) Β· Cryptocurrency Β· Decentralized Finance (DeFi) Β· Ethereum Β· Blockchain

πŸ—£οΈ Languages: English

⏳ Experience: 5+ years of experience as a Data Engineer

Requirements:
  • Passion for the benefits of decentralization and alignment with the 0x mission.
  • Exhibit core values such as doing the right thing, consistently shipping, and creating enduring value.
  • 5+ years of experience as a Data Engineer.
  • 1+ years of experience with Ethereum or Solana.
  • Experience building and operating highly available data pipelines.
  • Experience with Apache Kafka or other data pub/sub systems.
  • Experience with Data Modeling and Architecture.
  • Familiarity with programming, ideally in Python and/or TypeScript/Node/Go.
Responsibilities:
  • Collaborate with the Data Team, Product Managers, and Engineers to enhance data accessibility and usability for decision-making.
  • Develop and maintain efficient, scalable ETL pipelines for real-time and batch data processing (see the sketch below).
  • Contribute to the development of 0x Data APIs to support internal and external applications.
  • Develop and maintain data observability processes and standards.
  • Promote a culture of innovation and guide the team in exploring new technologies.
  • Identify and implement process improvements and automation for data management.
  • Mentor team members and foster a collaborative environment.
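
A rough illustration of the streaming side of this role: the micro-batching consumer below is a minimal sketch only (broker, topic, consumer group, and the flush target are hypothetical, not 0x's actual stack), using the confluent-kafka client.

    # Minimal sketch of a micro-batching Kafka consumer (hypothetical broker/topic names).
    import json
    from confluent_kafka import Consumer

    consumer = Consumer({
        "bootstrap.servers": "localhost:9092",   # assumption: local broker
        "group.id": "swap-events-etl",           # hypothetical consumer group
        "auto.offset.reset": "earliest",
        "enable.auto.commit": False,             # commit only after a successful flush
    })
    consumer.subscribe(["swap-events"])          # hypothetical topic

    def flush(batch):
        """Stand-in for a warehouse bulk load."""
        print(f"flushed {len(batch)} events")

    batch = []
    try:
        while True:
            msg = consumer.poll(timeout=1.0)
            if msg is None:
                continue
            if msg.error():
                raise RuntimeError(msg.error())
            batch.append(json.loads(msg.value()))
            if len(batch) >= 500:                    # micro-batch threshold
                flush(batch)
                consumer.commit(asynchronous=False)  # at-least-once delivery
                batch = []
    finally:
        consumer.close()

Committing offsets only after a successful flush gives at-least-once delivery, the usual trade-off for warehouse-bound pipelines.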

Related Jobs


πŸ“ US, Canada

🧭 Full-Time

πŸ’Έ 95,795 - 128,800 USD per year

πŸ” Internet of Things (IoT)

Requirements:
  • BS degree in Computer Science, Statistics, Engineering, or a related quantitative discipline.
  • 6+ years of experience in a data engineering and data science-focused role.
  • Proficiency in data manipulation and processing in SQL and Python.
  • Expertise in building data pipelines against new API endpoints, working from their documentation (see the sketch below).
  • Proficiency in building ETL pipelines to handle large volumes of data.
  • Demonstrated experience in designing data models at scale.
Responsibilities:
  • Build and maintain highly reliable computed tables, incorporating data from various sources, including unstructured and highly sensitive data.
  • Access, manipulate, and integrate external datasets with internal data.
  • Build analytical and statistical models to identify patterns, anomalies, and root causes.
  • Leverage SQL and Python to shape and aggregate data.
  • Incorporate generative AI tools into production data pipelines and automated workflows.
  • Collaborate closely with data scientists, data analysts, and Tableau developers to ship top quality analytic products.
  • Champion, role model, and embed Samsara’s cultural principles.
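
The API-extraction requirement above, in sketch form: cursor-paginated pulls are the common pattern when onboarding a new endpoint from its documentation. Endpoint, auth scheme, and pagination fields below are hypothetical, not Samsara's real API.

    # Minimal sketch of paginated API extraction (hypothetical endpoint and auth).
    import requests

    BASE_URL = "https://api.example.com/v1/devices"  # hypothetical endpoint
    HEADERS = {"Authorization": "Bearer <token>"}    # assumption: bearer-token auth

    def fetch_all():
        """Follow cursor pagination until the API stops returning a cursor."""
        rows, cursor = [], None
        while True:
            params = {"limit": 100}
            if cursor:
                params["after"] = cursor
            resp = requests.get(BASE_URL, headers=HEADERS, params=params, timeout=30)
            resp.raise_for_status()
            payload = resp.json()
            rows.extend(payload["data"])
            cursor = payload.get("next_cursor")      # hypothetical pagination field
            if not cursor:
                return rows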

Python Β· SQL Β· ETL Β· Data engineering Β· Data science Β· Data modeling

Posted 1 day ago

πŸ“ Spain

πŸ’Έ 80,000 - 110,000 EUR per year

πŸ” Financial services

Requirements:
  • 5+ years of professional experience in Data Engineering or similar roles.
  • Proficiency in SQL and DBT for data transformations (see the sketch below).
  • Fluency in Python or other modern programming languages.
  • Experience with infrastructure as code languages like Terraform.
  • Knowledge of data modeling, data warehouse technologies, and cloud infrastructures.
  • Experience with AWS or other cloud platforms like Azure or GCP.
  • Ability to provide constructive code reviews.
  • Strong communication and collaboration skills.
Responsibilities:
  • Work with engineering managers and tech leads to identify and plan projects based on team goals.
  • Collaborate with teams across engineering, analytics, and product to deliver technology for analytical use cases.
  • Write high-quality, understandable code.
  • Review teammates' work and offer feedback.
  • Serve as a technical mentor for other engineers.
  • Promote a respectful and supportive team environment.
  • Participate in on-call rotation.
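
For the SQL/DBT requirement above, a minimal sketch: dbt-core 1.5+ ships a programmatic runner, so a pipeline step can trigger transformations without shelling out. The "staging" selector is hypothetical.

    # Minimal sketch: invoking dbt transformations from Python (dbt-core >= 1.5).
    from dbt.cli.main import dbtRunner, dbtRunnerResult

    runner = dbtRunner()

    # Equivalent to `dbt run --select staging`; "staging" is a hypothetical selector.
    result: dbtRunnerResult = runner.invoke(["run", "--select", "staging"])

    if not result.success:
        raise RuntimeError(f"dbt run failed: {result.exception}")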

AWS Β· Python Β· SQL Β· Terraform Β· Data modeling

Posted 2 days ago

πŸ“ India

πŸ” Data Engineering

🏒 Company: Aryng πŸ‘₯ 11-50 Β· Consulting Β· Training Β· Analytics

Requirements:
  • 8+ years of data engineering experience.
  • 4+ years implementing and managing data solutions on cloud platforms such as GCP, AWS, or Azure.
  • Strong proficiency in Python and SQL.
  • Experience with BigQuery, Snowflake, Redshift, and DBT.
  • Understanding of data warehousing, data lakes, and cloud concepts.
  • Excellent communication, presentation, and problem-solving skills.
  • A B.S. in Computer Science or a related field.
  • Consulting background is a plus.
Responsibilities:
  • Implement asynchronous data ingestion and high-volume stream data processing (see the sketch below).
  • Conduct real-time data analytics using various data engineering techniques.
  • Implement application components using cloud technologies.
  • Assist in defining data pipelines and identifying bottlenecks for data management methodologies.
  • Utilize cutting-edge cloud platform solutions from GCP, AWS, and Azure.
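
The asynchronous-ingestion responsibility above, sketched with asyncio and aiohttp. URLs and the concurrency cap are hypothetical, and a real pipeline would write to storage rather than print.

    # Minimal sketch of asynchronous ingestion with bounded concurrency.
    import asyncio
    import aiohttp

    URLS = [f"https://api.example.com/records?page={i}" for i in range(1, 51)]  # hypothetical

    async def fetch(session, sem, url):
        async with sem:  # cap concurrent requests
            async with session.get(url, timeout=aiohttp.ClientTimeout(total=30)) as resp:
                resp.raise_for_status()
                return await resp.json()

    async def main():
        sem = asyncio.Semaphore(10)
        async with aiohttp.ClientSession() as session:
            pages = await asyncio.gather(*(fetch(session, sem, u) for u in URLS))
        print(f"ingested {len(pages)} pages")

    if __name__ == "__main__":
        asyncio.run(main())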

AWS Β· Python Β· SQL Β· GCP Β· Hadoop Β· Kafka Β· Snowflake Β· Tableau Β· Airflow Β· Azure Β· Data engineering Β· Spark

Posted 2 days ago

🏒 Company: DaCodes

Posted 3 days ago

πŸ“ Poland, Spain, United Kingdom

πŸ” Beauty marketplace

🏒 Company: Booksy πŸ‘₯ 501-1000 πŸ’° Debt Financing (4 months ago) Β· Mobile Payments Β· Marketplace Β· SaaS Β· Payments Β· Mobile Apps Β· Wellness Β· Software

Requirements:
  • 5+ years of experience in backend and data engineering, with strong system design skills.
  • Practical proficiency in cloud technologies (ideally GCP), with expertise in tools like BigQuery, Dataflow, Pub/Sub, or similar (see the sketch below).
  • Hands-on experience with CI/CD tools (e.g., GitLab CI) and infrastructure as code.
  • Strong focus on data quality, governance, and building scalable, automated workflows.
  • Experience designing self-service data platforms and infrastructure.
  • Proven ability to mentor and support others, fostering data literacy across teams.
Responsibilities:
  • Design and implement robust data solutions.
  • Enable teams to make informed, data-driven decisions.
  • Ensure data is accessible, reliable, and well-governed.
  • Play a key role in driving growth, innovation, and operational excellence.
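
One hedged reading of the GCP toolchain named above: a streaming Apache Beam (Dataflow) pipeline from Pub/Sub into BigQuery. Project, subscription, and table names are hypothetical, and the target table is assumed to already exist.

    # Minimal sketch of a streaming Beam pipeline: Pub/Sub -> parse -> BigQuery.
    import json
    import apache_beam as beam
    from apache_beam.options.pipeline_options import PipelineOptions

    options = PipelineOptions(streaming=True)  # assumption: Dataflow flags added at deploy time

    with beam.Pipeline(options=options) as p:
        (
            p
            | "ReadEvents" >> beam.io.ReadFromPubSub(
                subscription="projects/my-project/subscriptions/bookings-sub")
            | "Parse" >> beam.Map(json.loads)
            | "WriteRows" >> beam.io.WriteToBigQuery(
                "my-project:analytics.bookings",
                create_disposition=beam.io.BigQueryDisposition.CREATE_NEVER,  # table pre-exists
                write_disposition=beam.io.BigQueryDisposition.WRITE_APPEND)
        )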

GCP Β· Data engineering Β· CI/CD Β· Data modeling

Posted 3 days ago

πŸ“ United States, Canada

🧭 Regular

πŸ’Έ 125,000 - 160,000 USD per year

πŸ” Digital driver assistance services

🏒 Company: Agero πŸ‘₯ 1001-5000 πŸ’° $4,750,000 (over 2 years ago) Β· Automotive Β· InsurTech Β· Information Technology Β· Insurance

Requirements:
  • Bachelor's degree in a technical field with 5+ years of industry experience, or a Master's degree with 3+ years.
  • Extensive experience with Snowflake or other cloud-based data warehousing solutions.
  • Expertise in ETL/ELT pipelines using tools like Airflow, DBT, Fivetran.
  • Proficiency in Python for data processing and advanced SQL for managing databases.
  • Solid understanding of data modeling techniques and cost management strategies.
  • Experience with data quality frameworks and deploying data solutions in the cloud.
  • Familiarity with version control systems and implementing CI/CD pipelines.
Responsibilities:
  • Develop and maintain ETL/ELT pipelines to ingest data from diverse sources (see the sketch below).
  • Monitor and optimize cloud costs while performing query optimization in Snowflake.
  • Establish modern data architectures including data lakes and warehouses.
  • Apply dimensional modeling techniques and develop transformations using DBT or Spark.
  • Write reusable and efficient code, and develop data-intensive UIs and dashboards.
  • Implement data quality frameworks and observability solutions.
  • Collaborate cross-functionally and document data flows, processes, and architecture.
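
The ETL/ELT responsibility above, in sketch form: a minimal Airflow DAG with an ingest task feeding a transform task. DAG id, schedule, and task bodies are hypothetical placeholders, not Agero's pipeline.

    # Minimal sketch of a daily ELT DAG (Airflow 2.x style).
    from datetime import datetime
    from airflow import DAG
    from airflow.operators.python import PythonOperator

    def extract_and_load(**_):
        print("pull from sources, stage into the warehouse")  # placeholder

    def run_transformations(**_):
        print("run DBT/Spark models over staged data")        # placeholder

    with DAG(
        dag_id="daily_elt",              # hypothetical DAG id
        start_date=datetime(2024, 1, 1),
        schedule="@daily",
        catchup=False,
    ) as dag:
        load = PythonOperator(task_id="extract_and_load", python_callable=extract_and_load)
        transform = PythonOperator(task_id="transform", python_callable=run_transformations)
        load >> transform                # transform runs only after a successful load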

AWS Β· Python Β· SQL Β· Apache Airflow Β· DynamoDB Β· ETL Β· Flask Β· MongoDB Β· Snowflake Β· FastAPI Β· Pandas Β· CI/CD Β· Data modeling

Posted 5 days ago

πŸ’Έ 126,400 - 223,100 USD per year

πŸ” Financial services

🏒 Company: Block πŸ‘₯ 1001-5000 Β· Electronics Β· Manufacturing

Requirements:
  • 5+ years of data engineering experience.
  • Experience with database management and cloud computing.
  • Knowledge of data warehousing architecture and dimensional modeling.
  • End-to-end ETL pipeline development using SQL and Python (see the sketch below).
  • Experience with BI visualization tools like Looker or Tableau.
  • Familiarity with cloud-based services like Snowflake, Redshift, or Azure.
Responsibilities:
  • Be the expert in building the data foundation that powers BI and visualization tools.
  • Partner with various teams to translate requirements into scalable data pipelines.
  • Develop and manage BI data pipelines and centralized data warehouse with curated datasets.
  • Perform ad hoc analysis and design dashboards for stakeholders.
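
A minimal sketch of the end-to-end SQL + Python pattern named above (connection string, source, and target tables are hypothetical): extract with SQL, curate with pandas, publish for BI.

    # Minimal sketch of an extract -> transform -> load step with pandas/SQLAlchemy.
    import pandas as pd
    from sqlalchemy import create_engine

    engine = create_engine("postgresql://user:pass@host/db")  # hypothetical DSN

    # Extract: pull raw payments with SQL.
    raw = pd.read_sql("SELECT merchant_id, amount, created_at FROM raw_payments", engine)

    # Transform: curate a daily-revenue dataset for dashboards.
    daily = (
        raw.assign(day=pd.to_datetime(raw["created_at"]).dt.date)
           .groupby(["merchant_id", "day"], as_index=False)["amount"].sum()
    )

    # Load: publish the curated table for BI tools.
    daily.to_sql("daily_merchant_revenue", engine, if_exists="replace", index=False)
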
Posted 5 days ago

πŸ“ Brazil, Argentina, Peru, Colombia, Uruguay

πŸ” AdTech

🏒 Company: Workana Premium

Requirements:
  • 6+ years of experience in data engineering or related roles, preferably within the AdTech industry.
  • Expertise in SQL and experience with relational databases such as BigQuery and SpannerDB or similar.
  • Experience with GCP services, including Dataflow, Pub/Sub, and Cloud Storage.
  • Experience building and optimizing ETL/ELT pipelines in support of audience segmentation and analytics use cases.
  • Experience with Docker and Kubernetes for containerization and orchestration.
  • Familiarity with message queues or event-streaming tools, such as Kafka or Pub/Sub.
  • Knowledge of data modeling, schema design, and query optimization for performance at scale.
  • Programming experience in languages like Python, Go, or Java for data engineering tasks.
Responsibilities:
  • Build and optimize data pipelines and ETL/ELT processes to support AdTech products: Insights, Activation, and Measurement.
  • Leverage GCP tools like BigQuery, SpannerDB, and Dataflow to process and analyze real-time consumer-permissioned data (see the sketch below).
  • Design scalable and robust data solutions to power audience segmentation, targeted advertising, and outcome measurement.
  • Develop and maintain APIs to facilitate data sharing and integration across the platform’s products.
  • Optimize database and query performance to ensure efficient delivery of advertising insights and analytics.
  • Work with event-driven architectures using tools like Pub/Sub or Kafka to ensure seamless data processing.
  • Proactively monitor and troubleshoot issues to maintain data accuracy, security, and performance.
  • Drive innovation by identifying opportunities to enhance the platform’s capabilities in audience targeting and measurement.
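
The BigQuery work named above, in sketch form: a segmentation query through the google-cloud-bigquery client. Project, dataset, and the click threshold are hypothetical.

    # Minimal sketch of an audience-segmentation query via the BigQuery client.
    from google.cloud import bigquery

    client = bigquery.Client()  # assumption: credentials from the standard GCP environment

    sql = """
        SELECT user_id
        FROM `my-project.adtech.events`
        WHERE event_type = 'ad_click'
        GROUP BY user_id
        HAVING COUNT(*) >= 3
    """

    segment = [row.user_id for row in client.query(sql).result()]
    print(f"audience size: {len(segment)}")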

Docker Β· Python Β· SQL Β· ETL Β· GCP Β· Java Β· Kafka Β· Kubernetes Β· Go Β· Data modeling

Posted 9 days ago

πŸ“ Singapore

πŸ” Tech-enabled services

🏒 Company: Sleek πŸ‘₯ 251-500 πŸ’° $5,000,000 Debt Financing (4 months ago) Β· Accounting Β· Service Industry Β· Legal Β· Professional Services

Requirements:
  • 5+ years in data engineering, software engineering, or a related field.
  • Proficiency in working with relational databases (e.g., PostgreSQL, MySQL) and NoSQL databases (e.g., MongoDB, Cassandra).
  • Familiarity with big data frameworks like Hadoop, Hive, Spark, BigQuery, etc.
  • Strong expertise in programming languages such as Python, NodeJS, etc.
  • Advanced knowledge of cloud platforms (AWS, or GCP) and their associated data services.
  • Expertise in modern data warehouses like BigQuery, Snowflake or Redshift, etc.
  • Expertise in version control systems (e.g., Git) and CI/CD pipelines.
  • Excellent problem-solving abilities, attention to detail, and strong communication skills.
Responsibilities:
  • Design, implement, and optimize robust, scalable ETL/ELT pipelines to process large volumes of structured and unstructured data.
  • Develop and maintain conceptual, logical, and physical data models to support analytics and reporting requirements.
  • Architect, deploy, and maintain cloud-based data platforms (e.g., AWS, GCP).
  • Work closely with data analysts, business owners, and stakeholders to understand data requirements and deliver reliable solutions.
  • Ensure data quality, consistency, and security through robust validation and monitoring frameworks (see the sketch below).
  • Monitor, troubleshoot, and optimize the performance of data systems and pipelines.
  • Stay up to date with the latest industry trends and emerging technologies to continuously improve data engineering practices.
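
The validation responsibility above often reduces to a reusable quality gate like the sketch below (column names and thresholds are hypothetical; teams may reach for a framework such as Great Expectations instead).

    # Minimal sketch of a data-quality gate run before loading downstream.
    import pandas as pd

    def validate(df, key, required, max_null_rate=0.01):
        """Raise if the frame has missing columns, duplicate keys, or too many nulls."""
        missing = [c for c in required if c not in df.columns]
        if missing:
            raise ValueError(f"missing columns: {missing}")
        if df[key].duplicated().any():
            raise ValueError(f"duplicate values in key column {key!r}")
        null_rates = df[required].isna().mean()
        bad = null_rates[null_rates > max_null_rate]
        if not bad.empty:
            raise ValueError(f"null rate above {max_null_rate:.0%}: {bad.to_dict()}")
        return df

    # Usage: gate a pipeline stage with hypothetical columns and a loose threshold.
    frame = pd.DataFrame({"id": [1, 2, 3], "email": ["a@x.co", "b@x.co", None]})
    validate(frame, key="id", required=["id", "email"], max_null_rate=0.5)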

AWS Β· Node.js Β· PostgreSQL Β· Python Β· ETL Β· GCP Β· Git Β· Hadoop Β· MongoDB Β· MySQL Β· Snowflake Β· Cassandra Β· Data engineering Β· Spark Β· CI/CD Β· Data modeling

Posted 11 days ago

πŸ“ United States of America

🧭 Full-Time

πŸ’Έ 110,000 - 160,000 USD per year

πŸ” Insurance industry

🏒 Company: Verikai_External

Requirements:
  • Bachelor's degree or above in Computer Science, Data Science, or a related field.
  • At least 5 years of relevant experience.
  • Proficient in SQL, Python, and data processing frameworks such as Spark.
  • Hands-on experience with AWS services including Lambda, Athena, Dynamo, Glue, Kinesis, and Data Wrangler.
  • Expertise in handling large datasets using technologies like Hadoop and Spark.
  • Experience working with PII and PHI under HIPAA constraints.
  • Strong commitment to data security, accuracy, and compliance.
  • Exceptional ability to communicate complex technical concepts to stakeholders.
Responsibilities:
  • Design, build, and maintain robust ETL processes and data pipelines for large-scale data ingestion and transformation.
  • Manage third-party data sources and customer data to ensure clean and deduplicated datasets (see the sketch below).
  • Develop scalable data storage systems using cloud platforms like AWS.
  • Collaborate with data scientists and product teams to support data needs.
  • Implement data validation and quality checks, ensuring accuracy and compliance with regulations.
  • Integrate new data sources to enhance the data ecosystem and document data strategies.
  • Continuously optimize data workflows and research new tools for the data infrastructure.
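
One way the deduplication responsibility above can respect PII constraints: match on salted one-way hashes so raw identifiers never travel with the analytical dataset. Salt handling and field names here are hypothetical.

    # Minimal sketch of PII-safe deduplication across third-party sources.
    import hashlib
    import pandas as pd

    SALT = b"rotate-me"  # assumption: fetched from a secrets manager, never hard-coded

    def hash_pii(value):
        """One-way, salted hash of a normalized identifier."""
        normalized = value.strip().lower().encode()
        return hashlib.sha256(SALT + normalized).hexdigest()

    records = pd.DataFrame({
        "email": ["A@x.co", "a@x.co ", "b@x.co"],   # hypothetical vendor feeds
        "source": ["vendor_1", "vendor_2", "vendor_1"],
    })
    records["match_key"] = records["email"].map(hash_pii)

    # Keep one row per person; drop the raw identifier before anything ships downstream.
    deduped = records.drop_duplicates(subset="match_key").drop(columns=["email"])
    print(deduped)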

AWS Β· Python Β· SQL Β· DynamoDB Β· ETL Β· Spark

Posted 11 days ago
