
Staff Data Engineer

Posted about 13 hours ago

💎 Seniority level: Staff, 10+ years

📍 Location: Spain

🔍 Industry: AI, Data Engineering

🏢 Company: Clarity AI

🗣️ Languages: English

⏳ Experience: 10+ years

🪄 Skills: Python, SQL, Cloud Computing, ETL, Data Engineering, Microservices, Data Modeling

Requirements:
  • 10+ years in data architecture or engineering
  • Strong SQL skills
  • Experience with big data technologies
  • Background in architectural design for large-scale applications
  • Proficient in Python
  • Understanding of microservices and data pipeline design
Responsibilities:
  • Design and oversee data architecture solutions
  • Develop and implement architectural patterns for data validation and storage
  • Transform raw data into high-quality data models
  • Implement data quality checks
  • Work with engineering and product teams to understand business requirements
  • Lead initiatives to improve data practices
  • Continuously evolve data architecture

Related Jobs

🔥 Staff Data Engineer (m/f/d)
Posted about 1 month ago

📍 Europe

🧭 Full-Time

🔍 Supply Chain Risk Analytics

🏢 Company: Everstream Analytics 👥 251-500 💰 $50,000,000 Series B almost 2 years ago | Productivity Tools, Artificial Intelligence (AI), Logistics, Machine Learning, Risk Management, Analytics, Supply Chain Management, Procurement

Requirements:
  • Deep understanding of Python, including data manipulation and analysis libraries like Pandas and NumPy.
  • Extensive experience in data engineering, including ETL, data warehousing, and data pipelines.
  • Strong knowledge of AWS services, such as RDS, Lake Formation, Glue, Spark, etc.
  • Experience with real-time data processing frameworks like Apache Kafka/MSK.
  • Proficiency in SQL and NoSQL databases, including PostgreSQL, OpenSearch, and Athena.
  • Ability to design efficient and scalable data models.
  • Strong analytical skills to identify and solve complex data problems.
  • Excellent communication and collaboration skills to work effectively with cross-functional teams.
Responsibilities:
  • Manage and grow a remote team of data engineers based in Europe.
  • Collaborate with Platform and Data Architecture teams to deliver robust, scalable, and maintainable data pipelines.
  • Lead and own data engineering projects, including data ingestion, transformation, and storage.
  • Develop and optimize real-time data processing pipelines using technologies like Apache Kafka/MSK or similar.
  • Design and implement data lakehouses and ETL pipelines using AWS services like Glue or similar.
  • Create efficient data models and optimize database queries for optimal performance.
  • Work closely with data scientists, product managers, and engineers to understand data requirements and translate them into technical solutions.
  • Mentor junior data engineers and share your expertise. Establish and promote best practices.

🪄 Skills: AWS, PostgreSQL, Python, SQL, ETL, Apache Kafka, NoSQL, Spark, Data Modeling
