
Data Engineer

Posted about 20 hours ago


πŸ’Ž Seniority level: Senior, 5+ years

🏒 Company: Portless · πŸ‘₯ 11-50 · πŸ’° Series A about 2 months ago · Logistics, E-Commerce, Transportation

⏳ Experience: 5+ years

Requirements:
  • 5+ years of experience in data engineering, preferably in supply chain or logistics.
  • Expertise in Google Cloud Platform (GCP), especially BigQuery and Dataflow.
  • Proficiency in Python and JavaScript for serverless data processing.
  • Experience with MLOps and deploying machine learning models.
  • Strong knowledge of ETL/ELT processes, data modeling, and orchestration.
  • Excellent problem-solving skills and ability to work in a fast-paced environment.
Responsibilities:
  • Design, develop, and maintain data pipelines using BigQuery and Dataflow.
  • Build and manage MLOps workflows to support machine learning models.
  • Architect and implement serverless data solutions using JavaScript and Python.
  • Ensure data quality, integrity, and governance across platforms.
  • Collaborate with cross-functional teams to support analytics, reporting, and operational insights.
  • Provide technical leadership and mentor junior engineers.
  • Stay up to date with emerging data engineering and cloud technologies.

Related Jobs

πŸ”₯ Data Engineer
Posted about 6 hours ago

πŸ’Έ 120,000 - 165,500 USD per year

πŸ” Sports Analytics

🏒 Company: Swish Analytics · πŸ‘₯ 1-10 · πŸ’° $6,909,110 Series B almost 6 years ago · Big Data, Fantasy Sports, Predictive Analytics, Machine Learning, Analytics, Sports

  • BS/BA degree in Mathematics, Computer Science, or related STEM field
  • Minimum of 5+ years of demonstrated experience writing production level code (Python)
  • Proficiency in Python and SQL (preferably MySQL); minimum of 5 years of experience
  • Demonstrated experience with Airflow
  • Demonstrated experience with Kubernetes
  • Experience building end-to-end ETL pipelines
  • Experience utilizing REST APIs
  • Experience with version control (git), continuous integration and deployment, shell scripting, and cloud-computing infrastructures (AWS)
  • Experience with web scraping and cleaning unstructured data
  • Knowledge of data science and machine learning concepts
  • Knowledge of sports betting
  • Must have knowledge and understanding of NBA OR NFL and the ability use your knowledge of the sport to inform your work with complex datasets
  • Architect low-latency, real-time analytics systems including raw data collection, feature development and endpoint production
  • Build new sports betting data products and predictions offerings
  • Integrate large and complex real-time datasets into new consumer and enterprise products
  • Develop production-level predictive analytics into enterprise-grade APIs
  • Support production systems and help triage issues during live sporting events
  • Contribute to the design and implementation of new, fully-automated sports data delivery frameworks

πŸ“ AL, AR, AZ, CA (exempt only), CO, CT, FL, GA, ID, IL, IN, IA, KS, KY, MA, ME, MD, MI, MN, MO, MT, NC, NE, NJ, NM, NV, NY, OH, OK, OR, PA, SC, SD, TN, TX, UT, VT, VA, WA, WI

🧭 Full-Time

πŸ” Insurance

🏒 Company: Kin Insurance

  • Depth of experience in modern big data environments.
  • Advanced knowledge and experience with SQL and Python is expected.
  • Insurance domain knowledge.
  • Design and develop data pipelines and modeling raw data for downstream ingestion.
  • Mentor and guide data engineers on your team and across the organization, while collaborating with other engineers, product managers, analysts, and stakeholders.
  • Lead a cross-functional project team with members from AppEng, DataEng, BI, and business stakeholders.

AWS, Python, SQL, Apache Airflow, ETL, Cross-functional Team Leadership, Data engineering, Problem Solving, Mentoring, Documentation, Compliance, Data visualization, Data modeling, Data management

Posted about 8 hours ago
πŸ”₯ Data Engineer
Posted about 9 hours ago

πŸ“ United States

🧭 Contract

πŸ” Biotechnology

🏒 Company: Avomind · πŸ‘₯ 11-50 · Employment, Human Resources, Recruiting

  • Strong experience in data engineering and cloud platforms (preferably GCP).
  • Proficiency in programming languages like Python, SQL, and shell scripting.
  • Familiarity with data catalog tools (e.g., DataHub, Apache Atlas) and metadata management.
  • Experience with building and maintaining scalable ETL pipelines using orchestration tools (Dagster, Airflow).
  • Understanding of API development and integration.
  • Knowledge of data governance and data quality principles.
  • Background in biological or scientific data is a plus but not mandatory.
  • Design, build, and maintain ETL/ELT pipelines to process and transform data efficiently.
  • Develop and optimise scalable data architectures in the cloud.
  • Implement and maintain data cataloging solutions to ensure discoverability and governance.
  • Build APIs and integrations for seamless data exchange across systems.
  • Perform data quality checks and implement automated testing frameworks to ensure data accuracy and reliability.
  • Collaborate with teams to build self-service systems and promote data democratisation.
  • Document and maintain data engineering processes and best practices

Python, SQL, ETL, GCP, Airflow, API testing, Data engineering, Data modeling

πŸ”₯ Senior Data Engineer
Posted about 10 hours ago

πŸ” Software Development

  • Degree in a related field and 7+ years of data engineering experience.
  • Proficiency in tools and languages, such as AWS, dbt, Snowflake, Git, R, Python, SQL, SQL Server, and Snowflake.
  • Strong project management skills and the ability to communicate complex concepts effectively.
  • Strong analytical skills with the ability to collect, organize, analyze, and disseminate significant amounts of information with attention to detail and accuracy.
  • Adept at conveying complex data insights through a robust understanding of data management systems, warehouse methodologies, data quality standards, data modeling techniques, governance protocols, and advanced analytics.
  • Familiarity with agile work environments and scrum ceremonies.
  • Strong business acumen and experience in aligning data initiatives with business objectives.
  • Contribute to the strategic vision for data engineering and participate in the architectural design and development of new and complex data solutions, focusing on scalability, performance, and hands-on implementation.
  • Design and implement new data systems and infrastructure to ensure the reliability and scalability of data systems by actively contributing to day-to-day engineering tasks.
  • Influence key decisions regarding the data technology stack, infrastructure, and tools while actively engaging in hands-on engineering efforts in the creation and deployment of new data architectures and workflows.
  • Set coding standards and best practices for the Data Engineering & Operations team, conducting and participating in code reviews to maintain high-quality, consistent code.
  • Work closely with database developers, software development, product management, and AI/ML developers to align data initiatives with Assent’s organizational goals.
  • Collaborate with team members to monitor progress, adjust priorities, and meet project deadlines and objectives.
  • Identify opportunities for internal process improvements, including automating manual processes and optimizing data delivery.
  • Proactively support peers in continuous learning by providing technical guidance and training on data engineering development, analysis, and execution.
  • Be familiar with corporate security policies and follow the guidance set out by processes and procedures of Assent.

πŸ“ London, Greece

πŸ” Data Science

🏒 Company: VML Enterprise Solutions

  • Proficiency with Python and SQL programming languages.
  • Hands-on experience with cloud platforms like AWS, GCP, or Azure, and familiarity with big data technologies such as Hadoop or Spark.
  • Experience working with relational databases and NoSQL databases.
  • Strong knowledge of data structures, data modelling, and database schema design.
  • Experience in supporting data science workloads and working with both structured and unstructured data.
  • Familiarity with containerization technologies, such as Docker or Kubernetes.
  • Experience with data visualization tools, such as Tableau or Power BI is a plus.
  • Collaborate closely with data scientists, architects, and other stakeholders to understand and implement business requirements.
  • Provide data engineering support for AI model development and deployment, ensuring data scientists have access to the data they need in the format they need it.
  • Implement and optimize data transformations and ETL/ELT processes, using appropriate data engineering tools.
  • Work with a variety of databases and data warehousing solutions to store and retrieve data efficiently.
  • Implement monitoring, troubleshooting, and maintenance procedures for data pipelines to ensure the high quality of data and optimize performance.
  • Participate in the creation and ongoing maintenance of documentation, including data dictionaries, data catalogs, data flow diagrams, and process documentation.

AWS, Docker, Python, SQL, Cloud Computing, ETL, GCP, Hadoop, Kubernetes, Azure, Data engineering, Data science, Data Structures, NoSQL, Spark, Data visualization, Data modeling

Posted about 13 hours ago

πŸ“ India

πŸ” Software Development

🏒 Company: Jobgether · πŸ‘₯ 11-50 · πŸ’° $1,493,585 Seed about 2 years ago · Internet

  • Hands-on experience with GIS, location-based data ingestion pipelines, and AWS services such as EC2, S3, and Lambda.
  • Proficiency in Python or Java for orchestration of data pipelines.
  • Strong experience in writing analytical queries using SQL.
  • Familiarity with Airflow, Docker, and version control with Git.
  • Design and maintain data ingestion pipelines that integrate and process large datasets from multiple sources.
  • Build infrastructure for ETL (Extract, Transform, Load) processes, utilizing AWS technologies such as EC2, S3, EMR, and Lambda.
  • Collaborate with Product, Analytics, and Client Services teams to resolve data-related technical issues and ensure data infrastructure needs are met.
  • Write and optimize SQL queries to extract and analyze data effectively.
  • Participate in code reviews, ensure quality control, and test applications before deployment.
  • Contribute to the improvement of the location-based platform by proposing and implementing innovative solutions.

AWS, Docker, Python, SQL, Cloud Computing, ETL, Git, Java, Airflow, Data engineering

Posted 1 day ago

🧭 Contract

🏒 Company: Kaizen Analytix · πŸ‘₯ 11-50 · Information Services, Analytics, Software

  • 4 to 6 years of experience
  • GCP services
  • DevOps work – Building Ci/CD Pipeline with Jenkins & Gitlab.
  • Configuring Google Cloud Platform services
  • Managing data storage and processing
  • Designing and deploying data pipelines using GCP services
  • Developing data ingestion and transformation processes
  • Establishing and managing data storage solutions using GCP services.
  • DevOps work – Building Ci/CD Pipeline with Jenkins & Gitlab.
Posted 1 day ago

πŸ“ United States

🧭 Full-Time

πŸ’Έ 150,363 - 180,870 USD per year

πŸ” Software Development

  • At least a Bachelors Degree or foreign equivalent in Computer Science, Computer Engineering, Electrical and Electronics Engineering, or a closely related technical field, and at least five (5) years of post-bachelor’s, progressive experience writing shell scripts; validating data; and engaging in data wrangling.
  • Experience must include at least three (3) years of experience debugging data; transforming data into Microsoft SQL server; developing processes to import data into HDFS using Sqoop; and using Java, UNIX Shell Scripts, and Python.
  • Experience must also include at least one (1) year of experience developing Hive scripts for data transformation on data lake projects; converting Hive scripts to Pyspark applications; automating in Hadoop; and implementing CI/CD pipelines.
  • Design, develop, test, and implement Big Data technical solutions.
  • Recommend the right technologies and solutions for a given use case, from the application layer to infrastructure.
  • Lead the delivery of compiling and installing database systems, integrating data from a variety of data sources (data warehouse, data marts) utilizing on-prem or cloud-based data structures.
  • Drive solution architecture and perform deployments of data pipelines and applications.
  • Author DDL and DML SQL spanning technical tacks.
  • Develop data transformation code and highly complex provisioning pipelines.
  • Ingest data from relational databases.
  • Execute automation strategy.

AWS, Python, SQL, Hadoop, Java, Kafka, Snowflake, Data engineering, Spark, CI/CD, Scala, Scripting, Debugging

Posted 1 day ago
πŸ”₯ Data Engineer-I
Posted 2 days ago

πŸ“ USA

πŸ” Healthcare

🏒 Company: Innovaccer Inc.

  • SQL knowledge
  • ETL/ELT/Data pipeline knowledge
  • Python knowledge
  • Powershell / Bash knowledge
  • Excellent problem-solving and effective communication skills
  • Self-motivation, integrity and honesty
  • Collaborate with team, management, departments using virtual tools
  • Run Production data pipelines/processes, ensure the integrity of the data, and send out deliverables based on requirement/runbook documentation
  • Coordinate with the various technical teams to resolve issues/bugs/optimize said production processes
  • Coordinate with internal client facing team members to communicate the status of deliverables
  • Help develop/improve technical documentation to guide future software development projects and operations
  • Dedicated time to explore building out tech stack and capabilities where there are applicable use cases
  • Provide critical thinking, technical innovation, and extra attention to detail by serving as a trusted team member and peer code reviewer
  • Assists with external client communications when deliverables or receivables do not meet technical or project requirements, ensuring timely resolution and alignment

Python, SQL, Bash, ETL, Microsoft Azure, Postgres, Data modeling


🧭 Full-Time

πŸ” Consulting

🏒 Company: P3 Adaptive

  • US Citizenship or Green Card (We don’t sponsor work visas)
  • Strong written and spoken English
  • Proven time management skills
  • Proven ability to connect with a diverse range of technical and non-technical stakeholders
  • Experienced in Project Management
  • Intermediate or better knowledge of T-SQL for DDL and DML applications.
  • Experience with Azure Active Directory Security Groups and Role-Based Access Controls
  • Experience with SSIS, SSAS preferred
  • Experience with PowerShell and Python preferred
  • Insatiable curiosity and love of learning
  • Support the execution of Power BI projects, working alongside expert Principal Consultants and Solution Architects.
  • Create Data Storage Solutions with SQL Server and Data Lakes.
  • Develop ETL Pipelines with Azure Data Factory.
  • Provision Azure Subscriptions and Resources.
  • Develop Automation Solutions using languages such as PowerShell and Python
Posted 3 days ago

Related Articles

Posted about 1 month ago

Why is remote work such a great opportunity?

Why is remote work so appealing? Let's find out!

Posted 7 months ago

Insights into the evolving landscape of remote work in 2024 reveal the importance of certifications and continuous learning. This article breaks down emerging trends, sought-after certifications, and provides practical solutions for enhancing your employability and expertise. What skills will be essential for remote job seekers, and how can you navigate this dynamic market to secure your dream role?

Posted 8 months ago

Explore the challenges and strategies of maintaining work-life balance while working remotely. Learn about unique aspects of remote work, associated challenges, historical context, and effective strategies to separate work and personal life.

Posted 8 months ago

Google is gearing up to expand its remote job listings, promising more opportunities across various departments and regions. Find out how this move can benefit job seekers and impact the market.

Posted 8 months ago

Learn about the importance of pre-onboarding preparation for remote employees, including checklist creation, documentation, tools and equipment setup, communication plans, and feedback strategies. Discover how proactive pre-onboarding can enhance job performance, increase retention rates, and foster a sense of belonging from day one.