
Sr. Data Engineer

Posted 5 months ago

💎 Seniority level: Senior, 5 years

📍 Location: United States (MST)

🏢 Company: Two95 International Inc.

⏳ Experience: 5 years

🪄 Skills: AWS, Amazon Web Services, Data engineering

Requirements:
  • Bachelor’s degree in Computer Science, Computer Information Systems, Engineering, Statistics, or a closely related field (foreign education equivalent accepted).
  • Experience with AWS services for data and analytics.
  • 5 years of experience in data ingestion, extraction, and integration.
  • 5+ years of hands-on experience with the MarkLogic framework.
Responsibilities:
  • The role focuses on data ingestion, extraction, and integration.
  • Utilizing AWS services for data and analytics.
  • Implementing solutions using the MarkLogic framework.

Related Jobs

🔥 Sr. Data Engineer (GC25001)
Posted about 15 hours ago

πŸ“ United States

🧭 Full-Time

πŸ’Έ 150363.0 - 180870.0 USD per year

πŸ” Software Development

  • At least a Bachelor’s Degree or foreign equivalent in Computer Science, Computer Engineering, Electrical and Electronics Engineering, or a closely related technical field, and at least five (5) years of post-bachelor’s, progressive experience writing shell scripts; validating data; and engaging in data wrangling.
  • Experience must include at least three (3) years of experience debugging data; transforming data into Microsoft SQL Server; developing processes to import data into HDFS using Sqoop; and using Java, UNIX shell scripts, and Python.
  • Experience must also include at least one (1) year of experience developing Hive scripts for data transformation on data lake projects; converting Hive scripts to PySpark applications; automating in Hadoop; and implementing CI/CD pipelines.
  • Design, develop, test, and implement Big Data technical solutions.
  • Recommend the right technologies and solutions for a given use case, from the application layer to infrastructure.
  • Lead the delivery of compiling and installing database systems, integrating data from a variety of data sources (data warehouse, data marts) utilizing on-prem or cloud-based data structures.
  • Drive solution architecture and perform deployments of data pipelines and applications.
  • Author DDL and DML SQL spanning technical stacks.
  • Develop data transformation code and highly complex provisioning pipelines.
  • Ingest data from relational databases.
  • Execute automation strategy.
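The "validating data" requirement above usually reduces to small, testable row-level checks run before anything is loaded. A minimal sketch in Python; the column names and rules are hypothetical, not taken from the posting:

```python
# Row-level data validation: each column gets a predicate, and rows
# failing any check are flagged before load. Rules are illustrative.
from datetime import date

RULES = {
    "user_id": lambda v: isinstance(v, int) and v > 0,
    "email": lambda v: isinstance(v, str) and "@" in v,
    "signup_date": lambda v: isinstance(v, date),
}

def validate_row(row: dict) -> list:
    """Return the names of columns that are missing or fail their rule."""
    return [col for col, ok in RULES.items()
            if col not in row or not ok(row[col])]

good = {"user_id": 1, "email": "a@b.com", "signup_date": date(2024, 1, 2)}
bad = {"user_id": -5, "email": "not-an-email"}  # bad id, bad email, no date
```

`validate_row(good)` returns an empty list, while `validate_row(bad)` flags all three columns; in a real pipeline the flagged rows would be quarantined rather than loaded.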

AWS, Python, SQL, Hadoop, Java, Kafka, Snowflake, Data engineering, Spark, CI/CD, Scala, Scripting, Debugging


πŸ“ United States

🧭 Full-Time

πŸ’Έ 150363.0 - 180870.0 USD per year

πŸ” Software Development

🏢 Company: phData (👥 501-1000, 💰 $2,499,997 Seed about 7 years ago; Information Services, Analytics, Information Technology)

  • At least a Bachelor’s Degree or foreign equivalent in Computer Science, Computer Engineering, Electrical and Electronics Engineering, or a closely related technical field, and at least five (5) years of post-bachelor’s, progressive experience writing shell scripts; validating data; and engaging in data wrangling.
  • Experience must include at least three (3) years of experience debugging data; transforming data into Microsoft SQL Server; developing processes to import data into HDFS using Sqoop; and using Java, UNIX shell scripts, and Python.
  • Experience must also include at least one (1) year of experience developing Hive scripts for data transformation on data lake projects; converting Hive scripts to PySpark applications; automating in Hadoop; and implementing CI/CD pipelines.
  • Design, develop, test, and implement Big Data technical solutions.
  • Recommend the right technologies and solutions for a given use case, from the application layer to infrastructure.
  • Lead the delivery of compiling and installing database systems, integrating data from a variety of data sources (data warehouse, data marts) utilizing on-prem or cloud-based data structures.
  • Drive solution architecture and perform deployments of data pipelines and applications.
  • Author DDL and DML SQL spanning technical stacks.
  • Develop data transformation code and highly complex provisioning pipelines.
  • Ingest data from relational databases.
  • Execute automation strategy.

AWS, Python, SQL, ETL, Hadoop, Java, Kafka, Snowflake, Data engineering, Spark, CI/CD, Linux, Scala

Posted 7 days ago
🔥 Sr Data Engineer
Posted about 2 months ago

πŸ“ United States, Europe, India

πŸ” SaaS

  • Extensive experience in developing data and analytics applications in geographically distributed teams
  • Hands-on experience in using modern architectures and frameworks, structured, semi-structured and unstructured data, and programming with Python
  • Hands-on SQL knowledge and experience with relational databases such as MySQL, PostgreSQL, and others
  • Hands-on ETL knowledge and experience
  • Knowledge of commercial data platforms (Databricks, Snowflake) or cloud data warehouses (Redshift, BigQuery)
  • Knowledge of data catalog and MDM tooling (Atlan, Alation, Informatica, Collibra)
  • Experience with CI/CD pipelines for continuous deployment (CloudFormation templates)
  • Knowledge of how machine learning / A.I. workloads are implemented in batch and streaming, including preparing datasets, training models, and using pre-trained models
  • Exposure to software engineering processes that can be applied to Data Ecosystems
  • Excellent analytical and troubleshooting skills
  • Excellent communication skills
  • Excellent English (both verbal and written)
  • B.S. in Computer Science or equivalent
  • Design and develop our best-in-class cloud platform, working on all parts of the code stack from front-end, REST and asynchronous APIs, back-end application logic, SQL/NoSQL databases and integrations with external systems
  • Develop solutions across the data and analytics stack, from ETL to streaming data
  • Design and develop reusable libraries
  • Strengthen processes across the Data Ecosystem
  • Write unit and integration tests
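The ETL and "write unit and integration tests" bullets above pair naturally: keeping transform steps as pure functions, separate from I/O, makes them unit-testable. A hedged sketch; the event fields are illustrative, not from the posting:

```python
# A pure, unit-testable ETL transform step with no I/O.
# Field names ("Amount", "User") are hypothetical examples.
def normalize_event(raw: dict) -> dict:
    """Lowercase keys, strip string values, and coerce amount to float."""
    clean = {k.lower(): (v.strip() if isinstance(v, str) else v)
             for k, v in raw.items()}
    clean["amount"] = float(clean.get("amount", 0))
    return clean

# A unit test is then a one-line assertion per behavior:
assert normalize_event({"Amount": "3.5", "User": " bob "}) == \
    {"amount": 3.5, "user": "bob"}
```

Because the function touches no database or queue, the same assertions run in CI without any infrastructure.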

Python, SQL, Apache Airflow, Cloud Computing, ETL, Machine Learning, Snowflake, Algorithms, Apache Kafka, Data engineering, Data Structures, Communication Skills, Analytical Skills, CI/CD, RESTful APIs, DevOps, Microservices, Excellent communication skills, Data visualization, Data modeling, Data analytics, Data management


πŸ“ U.S.

πŸ’Έ 142500.0 - 155000.0 USD per year

πŸ” Music technology

🏢 Company: Splice (👥 101-250, 💰 $55,000,000 Series D about 4 years ago; Media and Entertainment, Music, Machine Learning, Software)

  • 5+ years of experience building scalable and durable software.
  • Demonstrated mastery of Python, SQL, and Unix fundamentals.
  • Operational excellence in maintaining Data Warehouses such as GCP BigQuery or AWS RedShift.
  • Strong familiarity with data transformation frameworks like sqlmesh or dbt.
  • Experience with business intelligence platforms or data visualization frameworks like Looker, Hashtable, or Observable.
  • Strong debugging skills, especially with distributed systems.
  • Experience building and supporting cloud infrastructure with Google Cloud Platform (GCP) and Amazon Web Services (AWS).
  • Clear and consistent communication in a distributed environment.
  • Own and operate the structure of the Data Warehouse, ensuring reliable ingestion of mission-critical data and reliable builds of our pipelines.
  • Build and maintain self-service tools and extensible datasets for organizational insights.
  • Identify and execute projects addressing scalability issues, automating workflows, and simplifying datasets for analytics.
  • Ensure data quality through tests, observability, RFC reviews, and guidance in data modeling.
  • Participate in business hours-only on-call rotation to maintain system uptime and quality.
  • Cultivate a culture of data literacy and data-driven decision making.

AWS, Python, SQL, GCP, Data engineering, Data visualization, Debugging

Posted about 2 months ago
🔥 Sr. Data Engineer
Posted about 2 months ago

πŸ“ United States

πŸ’Έ 150000.0 - 165000.0 USD per year

πŸ” Healthcare

🏢 Company: Transcarent (👥 251-500, 💰 $126,000,000 Series D 11 months ago; Personal Health, Health Care, Software)

  • You are entrepreneurial and mission-driven and can present your ideas with clarity and confidence.
  • You are a high-agency person. You refuse to accept undue constraints and the status quo and will not rest until you figure things out.
  • Advanced expertise in Python and dbt for data pipelines.
  • Advanced working SQL knowledge and experience working with relational databases.
  • Experience building and optimizing big data pipelines, architectures, and data sets; healthcare experience is a definite plus.
  • Experience performing root cause analysis on internal and external data and processes to answer specific business questions and identify opportunities for improvement.
  • Strong analytic skills related to working with unstructured datasets.
  • Build processes supporting data transformation, data structures, metadata, dependency, and workload management.
  • A successful history of manipulating, processing, and extracting value from large disconnected datasets.
  • Working knowledge of message queuing, stream processing, and highly scalable ‘big data’ data stores.
  • Strong project management and organizational skills.
  • Experience supporting and working with cross-functional teams in a dynamic environment.
  • Graduate degree in Computer Science, Statistics, Informatics, Information Systems or another quantitative field.
  • Be a data champion and seek to empower others to leverage the data to its full potential.
  • Create and maintain optimal data pipeline architecture with high observability and robust operational characteristics.
  • Assemble large, complex data sets that meet functional / non-functional business requirements.
  • Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc.
  • Build the infrastructure required for optimal data extraction, transformation, and loading from various sources using SQL, Python, and dbt.
  • Work with stakeholders, including the Executive, Product, Clinical, Data, and Design teams, to assist with data-related technical issues and support their data infrastructure needs.
  • Create data tools for analytics and data scientist team members that assist them in building and optimizing our product into an innovative industry leader.

Python, SQL, Apache Airflow, ETL, Kafka, Snowflake, Data engineering

🔥 Sr. Data Engineer
Posted 3 months ago

πŸ“ USA, Canada, Mexico

🧭 Full-Time

πŸ’Έ 175000.0 USD per year

πŸ” Digital tools for hourly employees

🏢 Company: TeamSense (👥 11-50, 💰 Seed about 1 year ago; Information Services, Information Technology, Software)

  • Bachelor's or Master's degree in Computer Science, Software Engineering, or a related technical field.
  • 7+ years of professional experience in software engineering including 5+ years of experience in data engineering.
  • Proven expertise in building and managing scalable data platforms.
  • Proficiency in Python.
  • Strong knowledge of SQL, data modeling, data migration and database systems such as PostgreSQL and MongoDB.
  • Exceptional problem-solving skills optimizing data systems.
  • As a Senior Data Engineer, your primary responsibility is to contribute to the design, development, and maintenance of a scalable and reliable data platform.
  • Analyze the current database and warehouse.
  • Design and develop scalable ETL/ELT pipelines to support data migration.
  • Build and maintain robust, scalable, and high-performing data platforms, including data lakes and/or warehouses.
  • Implement data engineering best practices and design patterns.
  • Guide design reviews for new features impacting data.

PostgreSQL, Python, SQL, ETL, MongoDB, Data engineering, Data modeling


πŸ“ United States

πŸ” Blood product donation and collection

  • Experience in data engineering.
  • Comfortable working with key stakeholders.
  • Enjoy delivering simple solutions to complex problems.
  • Ability to work in an agile environment.
  • Plan, deploy, and maintain data pipelines from internal databases and third-party SaaS applications.
  • Collaborate with key stakeholders and other functions to address data needs.
  • Deliver simple solutions to complex problems in an agile environment.

AWS, PostgreSQL, Python, SQL, Apache Airflow, Cloud Computing, ETL, Data engineering

Posted 3 months ago

πŸ“ United States

🧭 Full-Time

πŸ’Έ 115000.0 - 145000.0 USD per year

πŸ” Media and Entertainment

  • 5+ years of relevant experience
  • Experience in Data Modeling, Data Quality, ETL
  • 2+ years with AWS tech stack
  • Experience with Python, Spark, and Scala
  • Build data pipelines for internal & external datasets
  • Educate business partners on architecture and capabilities
  • Write documentation and architecture diagrams

AWS, Python, SQL, ETL, Snowflake, Spark, Linux, Scala, Data modeling

Posted 4 months ago
🔥 Sr. Data Engineer
Posted 4 months ago

πŸ“ United States

🧭 Full-Time

πŸ’Έ 115000.0 - 145000.0 USD per year

πŸ” Data Engineering

  • 5+ years of experience in data engineering
  • Strong knowledge of ETL/ELT development principles
  • Deep experience with Python and SQL
  • Experience with Airflow or similar orchestration engines
  • Understanding of CI/CD principles in data engineering
  • Design and scale data pipelines across various source systems
  • Implement data modeling and warehousing principles
  • Collaborate with teams to understand data requirements
  • Interface with technology teams for data extraction and transformation
  • Create documentation for products
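The "Airflow or similar orchestration engines" requirement above comes down to running pipeline tasks in dependency order. A toy illustration of that core idea in plain Python; the task names are hypothetical and this is not Airflow's API:

```python
# Toy dependency-ordered scheduler: the topological ordering at the
# heart of orchestration engines like Airflow. No cycle detection;
# a real engine would add it, plus retries and scheduling.
def run_order(tasks: dict) -> list:
    """tasks maps task name -> list of upstream dependencies."""
    order, done = [], set()
    def visit(name):
        if name in done:
            return
        for dep in tasks.get(name, []):
            visit(dep)  # run upstream tasks first
        done.add(name)
        order.append(name)
    for name in tasks:
        visit(name)
    return order

dag = {"extract": [], "transform": ["extract"], "load": ["transform"]}
```

`run_order(dag)` yields `["extract", "transform", "load"]`: each task runs only after everything it depends on has run.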

Python, SQL, Apache Airflow, Cloud Computing, ETL, Machine Learning, Data engineering, CI/CD, Data modeling
