- 7+ years of hands-on experience.
- Bachelor's degree (or equivalent work experience) in Computer Science, Data Science, Software Engineering, or a related field.
- Strong understanding of, and ability to mentor others in, data ETL processes and tools for designing and managing data pipelines.
- Proficient with big data frameworks and tools such as Apache Hadoop, Apache Spark, or Apache Kafka for processing and analyzing large datasets.
- Hands-on experience with data serialization formats such as JSON, Parquet, and XML.
- Consistently models and leads best practices and optimization in scripting languages such as Python, Java, or Scala for automation and data processing.
- Proficient with database administration and performance tuning for databases such as MySQL, PostgreSQL, or NoSQL databases.
- Proficient with containerization (e.g., Docker) and orchestration platforms (e.g., Kubernetes) for managing data applications.
- Experience with cloud platforms and data services for data storage and processing.
- Consistently designs and builds data solutions that are highly automated and performant, with quality checks that ensure data consistency and accuracy.
- Experienced at actively managing large-scale data engineering projects, including planning, resource allocation, risk management, and successful project delivery, adjusting style to the delivery method (e.g., Waterfall, Agile, POD).
- Understands data governance principles and data privacy regulations, with experience implementing security measures to protect data.
- Able to integrate data engineering pipelines with machine learning models and platforms.
- Strong problem-solving skills to identify and resolve complex data engineering issues efficiently.
- Ability to work effectively in cross-functional teams, collaborating with data scientists, analysts, and stakeholders to deliver data solutions.
- Ability to lead and mentor junior data engineers, providing guidance and support on complex data engineering projects.
- Influential communication skills to convey technical concepts to non-technical stakeholders and to document data engineering processes.
- Models a mindset of continuous learning, stays current with the latest advancements in data engineering technologies, and drives innovation.