Data Engineer

Posted 4 months ago

💎 Seniority level: Senior, 3 or more years

🔍 Industry: Software Development

🏢 Company: SumerSports

🗣️ Languages: English

⏳ Experience: 3 or more years

Requirements:
  • Proven experience as a data engineer, preferably with at least 3 years of relevant experience.
  • Some exposure to designing cloud-native solutions and implementations with Kubernetes.
  • Experience with Airflow or similar pipeline orchestration tools.
  • Strong background in Python programming.
  • Demonstrated experience in collaborating with Data Science and Engineering teams in a production environment.
  • Firm grasp of SQL and knowledge of relational data modeling schemas.
  • Experience with Databricks or Spark preferred.
  • Familiarity with modern data stack design and strong opinions on data lifecycle management, testing, and governance.
  • Effective communicator with the ability to advocate best data practices across the organization.
  • Passion for coaching and working in a team-oriented environment.
  • Commitment to improving developer experience and productivity.
  • Experience with distributed systems, microservices architecture, and cloud platforms (e.g., AWS, Azure, Google Cloud).
  • Excellent problem-solving skills and the ability to analyze and debug complex issues.
  • Strong communication and interpersonal skills, with the ability to collaborate effectively with cross-functional teams.
Responsibilities:
  • Develop and maintain data pipelines using Databricks, Airflow, or similar orchestration systems (see the illustrative sketch after this list).
  • Write robust, maintainable code in Python, following best practices and coding standards.
  • Design and implement solutions using Kubernetes, ensuring high availability and scalability.
  • Gather product data requirements and implement solutions to ingest, process, and serve that data to applications at scale.
  • Collaborate closely with Data Science and Engineering teams to build production-ready applications, optimizing for performance and scale.
  • Cultivate data from a wide range of sources for use by our data scientists and maintain documentation for data.
  • Design a modern data stack, with a keen focus on providing a platform for data scientists and ML engineers to develop production-ready models and pipelines.
  • Work across Data Science and Engineering teams to ensure effective collaboration and communication.
  • Actively participate in code reviews, coaching, and mentoring within cross-functional teams.
  • Enhance the developer experience through improved tooling and reduced toil.
  • Stay informed of industry advancements and advocate for the adoption of new technologies and best practices.
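To make the orchestration responsibilities above concrete, here is a minimal sketch of a daily pipeline using the Airflow 2.x TaskFlow API in Python. The DAG name, schedule, and task logic are illustrative assumptions, not details from the posting.

```python
# Minimal, hypothetical sketch of an ingest/process/serve pipeline in Airflow.
# Everything here (names, schedule, sample records) is illustrative only.
from datetime import datetime

from airflow.decorators import dag, task


@dag(schedule="@daily", start_date=datetime(2024, 1, 1), catchup=False)
def example_player_stats_pipeline():
    @task
    def extract() -> list[dict]:
        # Stand-in for pulling raw records from a source system or API.
        return [{"player_id": 1, "yards": 52}, {"player_id": 2, "yards": -3}]

    @task
    def transform(records: list[dict]) -> list[dict]:
        # Stand-in for cleaning and validating records before serving.
        return [r for r in records if r["yards"] >= 0]

    @task
    def load(records: list[dict]) -> None:
        # Stand-in for writing curated data to a warehouse or feature store.
        print(f"loading {len(records)} records")

    load(transform(extract()))


example_player_stats_pipeline()
```

In a real deployment the extract/transform/load bodies would call out to Databricks jobs, warehouse connections, or other services; the linear three-task shape is just the simplest way to show how Airflow wires ingest, processing, and serving steps together.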

Related Jobs

πŸ“ Mexico

🧭 Full-Time

πŸ” IT

🏒 Company: Rehire

Requirements:
  • 8+ years of experience in data engineering (MANDATORY).
  • Advanced Conversational English Skills. (C1/C2 Level REQUIRED)
  • Proficiency in Python, Scala, or Java within the data ecosystem.
  • Strong knowledge of SQL and NoSQL databases.
  • Experience with Snowflake, AWS, and distributed data processing systems.
  • Familiarity with data pipeline orchestration tools like Airflow.
  • Experience designing and implementing cross-cloud data pipelines (AWS, Snowflake, Databricks).
Responsibilities:
  • Design, develop, and optimize scalable data pipelines.
  • Build and manage large datasets for analysis and processing.
  • Implement distributed data processing systems to handle large-scale data.
  • Work with SQL and NoSQL databases for efficient data storage and retrieval.
  • Collaborate with technical teams to design cloud-based data architectures.

AWS, Python, SQL, ETL, Snowflake, Data engineering, Data modeling, Data management

Posted about 5 hours ago
🔥 Staff Data Engineer
Posted about 7 hours ago

📍 Spain

🧭 Full-Time

🔍 AI, Data Engineering

🏒 Company: Clarity AI

Requirements:
  • 10+ years in data architecture or engineering
  • Strong SQL skills
  • Experience with big data technologies
  • Background in architectural design for large-scale applications
  • Proficient in Python
  • Understanding of microservices and data pipeline design
Responsibilities:
  • Design and oversee data architecture solutions
  • Develop and implement architectural patterns for data validation and storage
  • Transform raw data into high-quality data models
  • Implement data quality checks
  • Work with engineering and product teams to understand business requirements
  • Lead initiatives to improve data practices
  • Continuously evolve data architecture

Python, SQL, Cloud Computing, ETL, Data engineering, Microservices, Data modeling

🔥 Senior Data Engineer
Posted about 7 hours ago

📍 Madrid, Barcelona

🏒 Company: Clarity AI

Requirements:
  • 5+ years in data architecture, data engineering, or a related role, with hands-on experience in data modeling, schema design, and cloud-based data systems
  • Strong Python and SQL skills and proficiency with distributed data stores
  • Proficiency in schema design, data modeling, and building data products
  • Proficient in at least one major programming language, preferably Python, with a software engineering mindset for writing clean, maintainable code
  • Deep understanding of architectural principles in microservices, distributed systems, and data pipeline design
  • Familiarity with containerized environments, public/private API integrations, and security best practices
  • Strong communication and interpersonal skills, with experience working cross-functionally
  • Proven ability to guide teams and drive complex projects to successful completion
  • Self-starter, able to take ownership and initiative, with high energy and stamina
  • Decisive and action-oriented, able to make rapid decisions even with incomplete information
  • Highly motivated, independent and deeply passionate about sustainability and impact
  • Excellent oral and written English communication skills (minimum C1 level-proficient user)
Responsibilities:
  • Transforming raw data into intuitive, high-quality data models
  • Implementing and monitoring data quality checks (see the sketch after this list)
  • Collaborating across functions
  • Working closely with engineering and product teams
  • Acting as a bridge between technical and non-technical stakeholders
  • Leading initiatives to improve data practices
  • Guiding the team in experimenting with new tools and technologies
  • Continuously evolving the data architecture
  • Applying a pragmatic approach to performance metrics and scaling decisions
  • Implementing performance metrics to monitor system health
  • Maintaining comprehensive documentation of data systems, processes, and best practices
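As a rough illustration of the data quality checks mentioned in this list, below is a small, hypothetical Python sketch using pandas; the column names ("id", "score") and the 5% null threshold are assumptions for illustration, not details from the posting.

```python
# Hypothetical sketch of a batch data quality check; column names and the
# 5% null threshold are illustrative assumptions only.
import pandas as pd


def check_batch(df: pd.DataFrame) -> dict:
    """Return simple health metrics for a batch and fail if it looks unhealthy."""
    metrics = {
        "row_count": len(df),
        "null_score_rate": float(df["score"].isna().mean()),
        "duplicate_id_count": int(df["id"].duplicated().sum()),
    }
    if metrics["null_score_rate"] > 0.05:
        raise ValueError(f"too many null scores: {metrics['null_score_rate']:.1%}")
    return metrics


if __name__ == "__main__":
    # Tiny demo batch: no nulls, one duplicated id.
    sample = pd.DataFrame({"id": [1, 2, 2], "score": [0.9, 0.7, 0.4]})
    print(check_batch(sample))
```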

Docker, Python, SQL, Apache Airflow, Cloud Computing, Kubernetes, Algorithms, Data engineering, Data Structures, Postgres, REST API, Communication Skills, Analytical Skills, CI/CD, Problem Solving, Agile methodologies, Microservices, Data visualization, Data modeling, Data analytics, Data management

🔥 Senior Data Engineer
Posted about 7 hours ago

📍 Tallinn, Harju County, Estonia. Barcelona, Catalonia, Spain. Lisbon, Lisbon, Portugal. Bucharest, Bucharest, Romania. Cluj-Napoca, Cluj County, Romania

🧭 Full-Time

🔍 Influencer Marketing

🏒 Company: Modash

Requirements:
  • Experienced as a Data Engineer
  • Used to working with unstructured data
  • Experienced working in Spark environment
  • Experience managing data workflows with a distributed workflow manager (e.g., Airflow, AWS Step Functions)
  • Hands-on experience in writing code in Python and SQL
  • Experience in building and maintaining ETL & ELT data pipelines
  • Familiarity with AWS ecosystem: DynamoDB, Glue, EMR, Kinesis, SQS, Lambda, ECS
  • Hands-on experience with SQL/NoSQL database design
Responsibilities:
  • Take ownership of building the best data platform for influencer marketing.
  • Design, develop, and test data pipelines to collect, process, and store data.
  • Collaborate with your colleagues, from pair programming to mob reviewing. We are all for one.
  • You are expected to take part in every area of the data product, from brainstorming, roadmap planning, implementing, and reviewing to releasing, working with the CEO, CTO, engineers, sales, and customers.
  • Help us choose the best technical direction by providing well-reasoned ideas on which frameworks and tools to use.
  • Implement systems to monitor data quality for optimized accuracy and clarity.
  • Teach and be taught through code reviews and feedback.

AWS, Python, SQL, Apache Airflow, DynamoDB, ETL, Data engineering, Spark

🔥 Principal Data Engineer
Posted about 9 hours ago

📍 AL, AR, AZ, CA (exempt only), CO, CT, FL, GA, ID, IL, IN, IA, KS, KY, MA, ME, MD, MI, MN, MO, MT, NC, NE, NJ, NM, NV, NY, OH, OK, OR, PA, SC, SD, TN, TX, UT, VT, VA, WA, and WI

🧭 Full-Time

🔍 Insurance

🏒 Company: Kin Insurance

Requirements:
  • 10+ years of experience in designing & architecting data systems, warehousing and/or ML Ops platforms
  • Proven experience in the design and architecture of large-scale data systems, including lakehouse and machine learning platforms, using modern cloud-based tools
  • Can communicate effectively with executives and team
  • Architect and implement solutions using Databricks for advanced analytics, data processing, and machine learning workloads
  • Expertise in data architecture and design, for both structured and unstructured data. Expertise in data modeling across transactional, BI and DS usage models
  • Fluency with modern cloud data stacks, SaaS solutions, and evolutionary architecture, enabling flexible, scalable, and cost-effective solutions
  • Expertise with all aspects of data management: data governance, data mastering, metadata management, data taxonomies and ontologies
  • Expertise in architecting and delivering highly-scalable and flexible, cost effective, cloud-based enterprise data solutions
  • Proven ability to develop and implement data architecture roadmaps and comprehensive implementation plans that align with business strategies
  • Experience working with data integration tools and Kafka for real-time data streaming to ensure seamless data flow across the organization
  • Expertise in dbt for transformations, pipelines, and data modeling
  • Experience with data analytics, BI, data reporting and data visualization
  • Experience with the insurance domain is a plus
Responsibilities:
  • Lead the overall data architecture design, including ingestion, storage, management, and machine learning engineering platforms.
  • Implement the architecture and create a vision for how data will flow through the organization using a federated approach.
  • Manage data governance across multiple systems: data mastering, metadata management, data definitions, semantic-layer design, data taxonomies, and ontologies.
  • Architect and deliver highly scalable, flexible, and cost-effective enterprise data solutions that support the development of architecture patterns and standards.
  • Help define the technology strategy and roadmaps for the portfolio of data platforms and services across the organization.
  • Ensure data security and compliance, working within industry regulations.
  • Design and document data architecture at multiple levels across conceptual, logical, and physical views.
  • Provide “hands-on” architectural guidance and leadership throughout the entire lifecycle of development projects.
  • Translate business requirements into conceptual and detailed technology solutions that meet organizational goals.
  • Collaborate with other architects, engineering directors, and product managers to align data solutions with business strategy.
  • Lead proof-of-concept projects to test and validate new tools or architectural approaches.
  • Stay current with industry trends, vendor product offerings, and evolving data technologies to ensure the organization leverages the best available tools.
  • Cross-train peers and mentor team members to share knowledge and build internal capabilities.

AWS, Leadership, Python, SQL, Apache Airflow, Cloud Computing, ETL, Kafka, Kubernetes, Machine Learning, Snowflake, Algorithms, Data engineering, Data science, Data Structures, REST API, Pandas, Spark, Tensorflow, Communication Skills, Analytical Skills, Collaboration, CI/CD, Problem Solving, Mentoring, Terraform, Microservices, JSON, Data visualization, Data modeling, Data analytics, Data management

🔥 Senior Data Engineer
Posted about 13 hours ago

📍 Germany, Austria, Italy, Spain, Portugal

🧭 Full-Time

🔍 Digital Solutions for Financial and Real Estate Industries

🏢 Company: PriceHubble · 👥 101-250 · 💰 Non-equity Assistance about 3 years ago · Artificial Intelligence (AI), PropTech, Big Data, Machine Learning, Analytics, Real Estate

Requirements:
  • 3+ years experience building and maintaining production data pipelines
  • Proficient in working with geospatial data (bonus)
Responsibilities:
  • Work with backend engineers and data scientists to turn raw data into trusted insights
  • Navigate cost-value trade-offs to deliver value to customers
  • Develop solutions that work in over 10 countries
  • Lead a project from concept to launch
  • Drive the team to deliver high-quality products, services, and processes
  • Improve the performance, data quality, and cost-efficiency of data pipelines
  • Maintain and monitor the data systems

PostgreSQL, Python, SQL, Apache Airflow, ETL, Data engineering

🔥 Junior Data Engineer
Posted about 14 hours ago

📍 United States, Canada

🧭 Full-Time

💸 90,000 - 105,000 CAD per year

🔍 Blockchain Infrastructure

🏢 Company: Figment · 👥 11-50 · Hospitality, Travel Accommodations, Art

Requirements:
  • At least 1 year of IT experience
  • Proficiency in SQL
  • Experience with a programming language (ideally Python)
  • Strong verbal and written communication skills
  • Experience in troubleshooting data quality issues
  • Skills in data analysis and visualization
Responsibilities:
  • Develop and maintain dashboards and reports
  • Investigate new chains for data collection
  • Review ingested data for quality issues
  • Automate manual processes
  • Collaborate with internal teams for data solutions

Python, SQL, Cloud Computing, Data Analysis, Git, Snowflake, Data engineering, Troubleshooting, Data visualization


πŸ“ LATAM

🧭 Full-Time

πŸ” Healthtech

🏒 Company: UrrlyπŸ‘₯ 1-10Artificial Intelligence (AI)Business DevelopmentSalesInformation Technology

Requirements:
  • Proficiency in SQL and Python
  • Strong experience with PostgreSQL
  • Experience working with APIs and cloud-based ETL processes using Airflow
  • Power BI experience is a huge advantage
Responsibilities:
  • Design, develop, and maintain ETL pipelines
  • Build and manage cloud-based workflows using Airflow
  • Integrate data from multiple sources using APIs
  • Collaborate with cross-functional teams to support data-driven decision-making
  • Create insightful reports and dashboards using Power BI

PostgreSQL, Python, SQL, Apache Airflow, Cloud Computing, ETL

Posted 2 days ago

πŸ“ Vietnam

🧭 Full-Time

πŸ” Software Development

🏒 Company: Employment HeroπŸ‘₯ 501-1000πŸ’° $166,333,052 Series F over 1 year agoManagement Information SystemsHuman ResourcesSaaSFinanceEmployee Benefits

Requirements:
  • At least 2 years of experience as a Data Engineer
  • Proficiency in Python, Java, Scala, and SQL
  • Experience with Hadoop and Spark
  • Familiarity with ETL tools
  • Cloud platform experience (AWS, Azure, GCP)
Responsibilities:
  • Design and implement data processing pipelines
  • Build and manage data warehouses
  • Establish data validation rules and cleaning processes
  • Collaborate with data scientists and analysts

AWS, Python, SQL, ETL, GCP, Hadoop, Java, Azure, Spark, Scala, Data modeling

Posted 2 days ago
🔥 Data Engineer
Posted 2 days ago

🧭 Full-Time

🔍 Software Development

🏒 Company: G2i Inc.

Requirements:
  • Strong functional programming background (e.g., Haskell, Clojure, Scala, F# or similar).
  • Experience in data engineering and ETL development.
  • Familiarity with AWS, Docker, and workflow systems.
  • Strong problem-solving skills with a proactive approach to development.
  • Excellent communication skills and ability to work in a high-expectation environment.
Responsibilities:
  • Design and implement modular ETL components for our code analysis platform.
  • Leverage functional programming principles to build scalable and maintainable solutions.
  • Optimize performance and handle edge cases in data processing workflows.
  • Work with AWS, Docker, and workflow orchestration tools to enhance system efficiency.
  • Collaborate with cross-functional teams to align development with business goals.
  • Maintain high coding standards and contribute to a culture of engineering excellence.

Related Articles

Posted 6 months ago

Insights into the evolving landscape of remote work in 2024 reveal the importance of certifications and continuous learning. This article breaks down emerging trends, sought-after certifications, and provides practical solutions for enhancing your employability and expertise. What skills will be essential for remote job seekers, and how can you navigate this dynamic market to secure your dream role?

Posted 6 months ago

Explore the challenges and strategies of maintaining work-life balance while working remotely. Learn about unique aspects of remote work, associated challenges, historical context, and effective strategies to separate work and personal life.

Posted 6 months ago

Google is gearing up to expand its remote job listings, promising more opportunities across various departments and regions. Find out how this move can benefit job seekers and impact the market.

Posted 6 months ago

Learn about the importance of pre-onboarding preparation for remote employees, including checklist creation, documentation, tools and equipment setup, communication plans, and feedback strategies. Discover how proactive pre-onboarding can enhance job performance, increase retention rates, and foster a sense of belonging from day one.

Posted 6 months ago

The article explores the current statistics for remote work in 2024, covering the percentage of the global workforce working remotely, growth trends, popular industries and job roles, geographic distribution of remote workers, demographic trends, work models comparison, job satisfaction, and productivity insights.