Software Engineer, Data Infrastructure @ Docker

Posted 2 days ago
United States · Full-Time · Data Infrastructure
Company: Docker
Location: United States
Languages: English
Seniority level: Senior, 2+ years
Experience: 2+ years
Skills:
AWS, Python, SQL, Apache Airflow, Git, Kotlin, Snowflake, Data Engineering, Go, CI/CD, Software Engineering
Requirements:
- 2+ years of software engineering experience, preferably with a focus on data engineering or analytics systems
- Experience with a major cloud platform (AWS, GCP, or Azure), including basic data services (S3, GCS, etc.)
- Proficiency with SQL and experience with a cloud data warehouse (e.g., Snowflake, Redshift, BigQuery)
- Familiarity with data transformation tools (e.g., DBT) and modern BI platforms (e.g., Sigma)
- Familiarity with workflow orchestration tools (e.g., Apache Airflow, Dagster); a minimal sketch of such a workflow follows this list
- Proficiency in Python, Go, Kotlin, and other programming languages used in data engineering
- Familiarity with version control (Git) and modern software development practices (CI/CD)
- Basic understanding of data warehousing concepts (e.g., dimensional modeling) and analytics architectures
- An eagerness to learn about distributed data systems and stream processing concepts
- Foundational knowledge of data quality and testing principles
- Strong communication and collaboration skills
- Ability to take direction and work effectively as part of a team
- A proactive attitude toward problem-solving and self-improvement
- Experience in an internship or junior role at a technology company (Preferred)
- Knowledge of container technologies (Docker, Kubernetes) (Preferred)
- Advanced degree in Computer Science, Data Engineering, or a related technical field (Preferred)
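To make the Airflow + DBT requirement concrete, a daily orchestration workflow of the kind this role involves typically looks like the following. This is a minimal sketch, not Docker's actual pipeline: the DAG id, project directory, and target name are hypothetical placeholders.

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator

# A minimal daily DAG that builds DBT models, then runs DBT tests.
# All identifiers below (dag_id, paths, target) are assumptions.
with DAG(
    dag_id="daily_dbt_build",
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    dbt_run = BashOperator(
        task_id="dbt_run",
        bash_command="dbt run --project-dir /opt/analytics --target prod",
    )
    dbt_test = BashOperator(
        task_id="dbt_test",
        bash_command="dbt test --project-dir /opt/analytics --target prod",
    )
    # Tests only run after the models build successfully.
    dbt_run >> dbt_test
```

Chaining the test task after the run task means a failed model build blocks downstream consumers from seeing untested data, which is the usual reason to pair the two steps in one DAG.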
Responsibilities:
- Contribute to the design and implementation of highly scalable data infrastructure leveraging Snowflake, AWS, Airflow, DBT, and Sigma
- Implement and maintain end-to-end data pipelines supporting batch and real-time analytics
- Design, build, and maintain robust data processing systems
- Implement data transformations and modeling using DBT
- Develop and maintain data orchestration workflows using Apache Airflow
- Assist with optimizing Snowflake performance and cost efficiency
- Contribute to building data APIs and services
- Work with Product, Engineering, and Business teams to understand data requirements
- Support Data Scientists and Analysts by providing access to reliable, high-quality data
- Collaborate with business teams to deliver and maintain accurate reporting and operational dashboards
- Engage with Security and Compliance teams to support data governance implementation
- Assist with monitoring, alerting, and incident response for critical data systems
- Support the implementation of data quality frameworks and automated testing (see the sketch after this list)
- Participate in performance optimization and cost management initiatives
- Troubleshoot and resolve technical issues affecting data availability and accuracy
- Proactively learn technical skills, system design, and data engineering best practices
- Participate in technical design reviews and provide feedback on documentation
- Actively contribute to team knowledge sharing and documentation efforts
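The data quality and automated testing responsibility often starts with small, composable assertions over query results. Below is a self-contained sketch of that idea; the column names and sample rows are invented for illustration, and in practice the rows would come from a warehouse query rather than an inline list.

```python
from dataclasses import dataclass

@dataclass
class QualityResult:
    check: str
    passed: bool

def check_not_null(rows: list[dict], column: str) -> QualityResult:
    """Fail if any row is missing a value in the given column."""
    passed = all(row.get(column) is not None for row in rows)
    return QualityResult(check=f"not_null:{column}", passed=passed)

def check_unique(rows: list[dict], column: str) -> QualityResult:
    """Fail if the column contains duplicate values."""
    values = [row[column] for row in rows]
    passed = len(values) == len(set(values))
    return QualityResult(check=f"unique:{column}", passed=passed)

if __name__ == "__main__":
    # Hypothetical sample data standing in for warehouse rows.
    rows = [
        {"id": 1, "repository": "library/nginx"},
        {"id": 2, "repository": "library/redis"},
    ]
    for result in (check_not_null(rows, "repository"), check_unique(rows, "id")):
        print(result)
```

Checks like these are typically wired into an orchestration DAG (as in the Airflow sketch above) so that a failed assertion halts the pipeline before bad data reaches dashboards.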