
Data Engineer

Posted 4 days ago

💎 Seniority level: Senior, 5 years

📍 Location: United States, EST

💸 Salary: 103,500 - 143,500 USD per year

🔍 Industry: Public Health

🗣️ Languages: English

⏳ Experience: 5 years

🪄 Skills: AWS, Python, SQL, Cloud Computing, ETL, Amazon Web Services, Data engineering, Postgres, Data modeling, Scripting, Data management

Requirements:
  • Minimum of 5 years of relevant experience in data engineering.
  • Proficiency in programming languages commonly used in data engineering, such as Python, Java, Scala, or SQL. Candidates should be able to implement data automation within existing frameworks rather than writing one-off scripts.
  • Experience with large-scale projects using Amazon Web Services is required. Certification is preferred.
  • Strong technical writing skills for creating documentation, policies, and procedures.
  • Experience with project planning, including developing timelines, setting milestones, and managing resources.
  • Knowledge of data warehousing concepts and tools.
  • Experience with cloud computing platforms.
  • Experience with data security and data governance.
  • Experience with engineering best practices such as source control, automated testing, continuous integration and deployment, and peer review.
  • Expertise in data modeling, ETL (Extract, Transform, Load) processes, and data integration techniques.
  • Strong analytical thinking and problem-solving abilities.
  • Excellent verbal and written communication skills, including the ability to convey technical concepts to non-technical partners effectively.
  • Flexibility to adapt to evolving project requirements and priorities.
  • Outstanding interpersonal and teamwork skills, and the ability to develop productive working relationships with colleagues and partners.
  • Experience working in a virtual environment with remote partners and teams.
  • Proficiency in Microsoft Office.
Responsibilities:
  • Collaborate with data scientists, analysts, and other partners to understand their data needs and requirements, and to ensure that the data infrastructure supports the organization's goals and objectives.
  • Collaborate with cross-functional teams to understand data requirements and design scalable solutions that meet business needs.
  • Implement and maintain ETL processes to ensure the accuracy, completeness, and consistency of data (a minimal sketch follows this list).
  • Implement security measures to protect sensitive information.
  • Design and manage data storage systems, including relational databases, NoSQL databases, and data warehouses.
  • Stay current with industry trends, best practices, and emerging technologies in data engineering, and incorporate them into the organization's data infrastructure.
  • Create and manage the systems and pipelines that enable efficient and reliable flow of data, including ingestion, processing, and storage.
  • Collect data from various sources, transforming and cleaning it to ensure accuracy and consistency. Load data into storage systems or data warehouses.
  • Optimize data pipelines, infrastructure, and workflows for performance and scalability.
  • Monitor data pipelines and systems for performance issues, errors, and anomalies, and implement solutions to address them.
  • Provide technical guidance to other staff.
  • Communicate effectively with partners at all levels of the organization to gather requirements, provide updates, and present findings.
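
A minimal sketch of the kind of ETL step described above, using Python's built-in sqlite3 as a stand-in for the Postgres/AWS stack named in this posting; all table and column names are invented for the example:

```python
# Extract rows from a source table, clean them, and load them
# idempotently into a reporting table. Assumes cases_clean has a
# primary key on id; all names here are hypothetical.
import sqlite3

def run_etl(conn: sqlite3.Connection) -> int:
    # Extract: pull raw case records from the source table.
    rows = conn.execute(
        "SELECT id, county, case_count FROM raw_cases"
    ).fetchall()

    # Transform: drop incomplete records and normalize county names.
    cleaned = [
        (rid, county.strip().title(), count)
        for rid, county, count in rows
        if county is not None and count is not None and count >= 0
    ]

    # Load: upsert so reruns of the pipeline stay consistent.
    conn.executemany(
        "INSERT INTO cases_clean (id, county, case_count) VALUES (?, ?, ?) "
        "ON CONFLICT(id) DO UPDATE SET county = excluded.county, "
        "case_count = excluded.case_count",
        cleaned,
    )
    conn.commit()
    return len(cleaned)
```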

Related Jobs

🔥 HR Data Engineer
Posted about 5 hours ago

📍 United States

💸 94,800 - 151,400 USD per year

🏢 Company: careers_gm

  • 5+ years of experience in an HR Data Engineer role, leading HR data engineering transformation and implementing data pipelines and data solutions in the People Analytics/HR domain
  • Very good understanding of HR data and HR employee lifecycle processes (talent acquisition, talent development, workforce planning, engagement, employee listening, external benchmarking, etc.)
  • Very good understanding of HCM data architecture, models, and data pipelines, and experience designing and implementing data integrations and ETLs with Workday (RaaS, APIs); see the extraction sketch after this list
  • Experience designing and automating data and analytics solutions that can provide insights and recommendations at scale
  • Proficiency in SQL, R/Python and ETL tools
  • Deep expertise in modern data platforms (particularly Databricks) and end-to-end data architecture (DLT Streaming Pipelines, Workflows, Notebooks, DeltaLake, Unity Catalog)
  • Experience with different authentication (Basic Auth, Oauth, etc.) and encryption methods and tools (GPG, Voltage, etc.)
  • Very strong data analytics skills and ability to leverage multiple internal and external data sources to enable data-driven insights and inform strategic talent decisions
  • Knowledge of compliance and regulatory requirements associated with data management
  • Experience working in environments requiring strict confidentiality and handling of sensitive data
  • Great communication skills and ability to explain complex technical concepts to non-technical stakeholders.
  • Degree with quantitative focus (e.g., Mathematics, Statistics) and/or degree in Human Resources is a plus
  • Design, develop, and maintain ETL/ELT processes for HR data from multiple systems including Workday to empower data-driven decision-making
  • Drive implementation of robust HR data models and pipelines optimized for reporting and analytics, ensuring data quality, reliability, and security for on-prem and Azure cloud solutions.
  • Develop pipelines and testing automation to ensure HR data quality and integrity across multiple data sources
  • Collaborate with People Analytics and HR business partners to understand data requirements and deliver reliable solutions. Collaborate with technical teams to build the best-in-class data environment and technology stack for People Analytics teams.
  • Ensure data integrity, quality, consistency, security, and compliance (e.g., GDPR, CCPA, HIPAA where applicable).
  • Design and implement secure processes for handling sensitive information in our data tech stack while maintaining appropriate access controls and confidentiality
  • Automate manual HR reporting and improve data accessibility through scalable data pipelines across the entire HR employee lifecycle
  • Troubleshoot and resolve data-related issues quickly and efficiently.
  • Contribute to HR tech stack evaluations and migrations, especially around data capabilities and API integrations.
  • Incorporate external data sources into internal datasets for comprehensive analysis
  • Manage and optimize platform architecture including Databricks environment configuration and performance optimization
  • Stay up to date with emerging trends and advancements in data engineering – both technically and in the HR and People Analytics/sciences domain
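
A rough sketch of the kind of Workday RaaS extraction mentioned above; the host, tenant, report path, and credentials are placeholders, and production integrations typically use OAuth rather than basic auth:

```python
# Pull a Workday RaaS (Report-as-a-Service) custom report into a
# pandas DataFrame. The URL components below are hypothetical.
import io

import pandas as pd
import requests

RAAS_URL = (
    "https://wd2-impl-services1.workday.com/ccx/service/customreport2/"
    "acme_tenant/integration_user/HR_Headcount_Report"  # placeholder report
)

def fetch_headcount(user: str, password: str) -> pd.DataFrame:
    resp = requests.get(
        RAAS_URL,
        params={"format": "csv"},  # RaaS can also emit JSON or XML
        auth=(user, password),
        timeout=60,
    )
    resp.raise_for_status()
    return pd.read_csv(io.StringIO(resp.text))
```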

AWS, Python, SQL, Apache Airflow, ETL, API testing, Azure, Data engineering, NoSQL, RESTful APIs, Data visualization, Data modeling, Data analytics, Data management

🔥 Senior Data Engineer
Posted about 22 hours ago

📍 Worldwide

🧭 Full-Time

🔍 Software Development

🏢 Company: Kit 👥 11-50 💰 funding over 1 year ago · Education, Financial Services, Apps

  • Strong command of SQL, including DDL and DML.
  • Proficient in Python
  • Strong understanding of DBMS internals, including an appreciation for platform-specific nuances.
  • A willingness to work with Redshift and deeply understand its nuances.
  • Familiarity with our key tools (Redshift, Segment, dbt, GitHub)
  • 8+ years in data, with at least 3 years specializing in Data Engineering
  • Proven track record managing and optimizing OLAP clusters
  • Experience refactoring problematic data pipelines without disrupting business operations
  • History of implementing data quality frameworks and validation processes
  • Dive into our Redshift warehouse, dbt models, and workflows.
  • Evaluate the CRM data lifecycle, including source extraction, warehouse ingestion, transformation, and reverse ETL (see the ingestion sketch after this list).
  • Refine and start implementing your design for source extraction and warehouse ingestion.
  • Complete the implementation of the CRM source extraction/ingestion project and use the learnings to refine your approach in preparation for other, similar initiatives including, but by no means limited to, web traffic events and product usage logs.
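
A sketch of the staging-table upsert pattern commonly used for Redshift warehouse ingestion, as referenced above; the table names, S3 path, and IAM role are placeholders, and psycopg2 works here because Redshift speaks the Postgres wire protocol:

```python
# Load a CRM batch into a temp staging table, delete matching keys
# from the target, then insert: an idempotent upsert on Redshift.
import psycopg2

UPSERT_SQL = """
CREATE TEMP TABLE crm_contacts_stage (LIKE analytics.crm_contacts);

COPY crm_contacts_stage
FROM 's3://example-bucket/crm/contacts.csv'                -- placeholder
IAM_ROLE 'arn:aws:iam::123456789012:role/redshift-load'    -- placeholder
CSV IGNOREHEADER 1;

DELETE FROM analytics.crm_contacts
USING crm_contacts_stage
WHERE analytics.crm_contacts.contact_id = crm_contacts_stage.contact_id;

INSERT INTO analytics.crm_contacts
SELECT * FROM crm_contacts_stage;
"""

def upsert_contacts(dsn: str) -> None:
    # psycopg2 wraps this in a single transaction, committed on exit.
    with psycopg2.connect(dsn) as conn, conn.cursor() as cur:
        cur.execute(UPSERT_SQL)
```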

Python, SQL, ETL, Git, Data engineering, RDBMS, Data modeling, Data management

Posted 1 day ago

📍 United States of America

🏢 Company: IDEXX

  • Bachelor’s degree in Computer Science, Computer Engineering, Information Systems, Information Systems Engineering, or a related field and 5 years of experience; or a Master’s degree in one of those fields and 3 years of related professional experience.
  • Advanced SQL knowledge and experience working with relational databases, including Snowflake, Oracle, Redshift.
  • Experience with AWS or Azure cloud platforms
  • Experience with data pipeline and workflow scheduling tools: Apache Airflow, Informatica (a DAG sketch follows this list).
  • Experience with ETL/ELT tools and data processing techniques
  • Experience in database design, development, and modeling
  • 3 years of related professional experience with object-oriented languages: Python, Java, and Scala
  • Design and implement scalable, reliable distributed data processing frameworks and analytical infrastructure
  • Design metadata and schemas for assigned projects based on a logical model
  • Create scripts for physical data layout
  • Write scripts to load test data
  • Validate schema design
  • Develop and implement node cluster models for unstructured data storage and metadata
  • Design advanced Structured Query Language (SQL), data definition language (DDL), and Python scripts
  • Define, design, and implement data management, storage, backup and recovery solutions
  • Design automated software deployment functionality
  • Monitor structural performance and utilization, identifying problems and implementing solutions
  • Lead the creation of standards, best practices and new processes for operational integration of new technology solutions
  • Ensure environments are compliant with defined standards and operational procedures
  • Implement measures to ensure data accuracy and accessibility, constantly monitoring and refining the performance of data management systems
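
A minimal sketch of the scheduled-workflow duties above, assuming the Airflow 2.x TaskFlow API; the task bodies, schedule, and names are placeholders:

```python
# A daily DAG that loads test data and validates the result.
from datetime import datetime

from airflow.decorators import dag, task

@dag(schedule="@daily", start_date=datetime(2024, 1, 1), catchup=False)
def warehouse_maintenance():
    @task
    def load_test_data() -> int:
        # A real task would issue COPY/INSERT statements via a
        # Snowflake or Redshift provider hook.
        print("loading test data into the staging schema")
        return 42  # row count, passed downstream via XCom

    @task
    def validate_schema(row_count: int) -> None:
        # Fail the run loudly if the load produced nothing.
        if row_count == 0:
            raise ValueError("staging load returned no rows")

    validate_schema(load_test_data())

warehouse_maintenance()
```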

AWS, Python, SQL, Apache Airflow, Cloud Computing, ETL, Java, Oracle, Snowflake, Azure, Data engineering, Scala, Data modeling, Data management

🔥 Senior Data Engineer
Posted 2 days ago

📍 United States

🧭 Full-Time

💸 145,000 - 200,000 USD per year

🔍 Daily Fantasy Sports

🏢 Company: PrizePicks 👥 101-250 💰 Corporate round about 2 years ago · Gaming, Fantasy Sports, Sports

  • 5+ years of experience in a data engineering or data-oriented software engineering role, creating and shipping end-to-end data engineering pipelines.
  • 2+ years of experience acting as technical lead and providing mentorship and feedback to junior engineers.
  • Extensive experience building and optimizing cloud-based data streaming pipelines and infrastructure (a consumer-loop sketch follows this list).
  • Extensive experience exposing real-time predictive model outputs to production-grade systems leveraging large-scale distributed data processing and model training.
  • Experience in most of the following:
  • SQL/NoSQL databases/warehouses: Postgres, BigQuery, BigTable, Materialize, AlloyDB, etc.
  • Replication/ELT services: Data Stream, Hevo, etc.
  • Data Transformation services: Spark, Dataproc, etc.
  • Scripting languages: SQL, Python, Go
  • Cloud platform services in GCP and analogous systems: Cloud Storage, Cloud Compute Engine, Cloud Functions, Kubernetes Engine, etc.
  • Data Processing and Messaging Systems: Kafka, Pulsar, Flink
  • Code version control: Git
  • Data pipeline and workflow tools: Argo, Airflow, Cloud Composer
  • Monitoring and Observability platforms: Prometheus, Grafana, ELK stack, Datadog
  • Infrastructure as Code platforms: Terraform, Google Cloud Deployment Manager
  • Other platform tools such as Redis, FastAPI, and Streamlit
  • Enhance the capabilities of our existing Core Data platforms and develop new integrations with both internal and external APIs within the Data organization.
  • Work closely with DevOps, architects, and engineers to ensure the success of the Core Data platform.
  • Collaborate with Analytics Engineers to enhance data transformation processes, streamline CI/CD pipelines, and optimize team collaboration workflows.
  • Architect and implement Infrastructure as Code (IaC) solutions to automate and streamline the deployment and management of data infrastructure.
  • Develop and manage CI/CD pipelines to automate and streamline the deployment of data solutions.
  • Ensure code is thoroughly tested, effectively integrated, and efficiently deployed, in alignment with industry best practices for version control, automation, and quality assurance.
  • Serve as a Data Engineering thought leader within the broader PrizePicks technology organization by staying current with emerging technologies, implementing innovative solutions, and sharing knowledge and best practices with junior team members and collaborators.
  • Provide on-call support as part of a shared rotation between the Data and Analytics Engineering teams to maintain system reliability and respond to critical issues.
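
A minimal consumer-loop sketch for the streaming-pipeline work above, using confluent-kafka; the broker, consumer group, and topic names are invented:

```python
# Consume events from Kafka, decode them, and hand each one to a
# loader callback (e.g., a Postgres or BigQuery writer).
import json

from confluent_kafka import Consumer

conf = {
    "bootstrap.servers": "broker:9092",       # placeholder broker
    "group.id": "core-data-ingest",           # placeholder group
    "auto.offset.reset": "earliest",
}

def consume_events(handle_event) -> None:
    consumer = Consumer(conf)
    consumer.subscribe(["game.projections"])  # placeholder topic
    try:
        while True:
            msg = consumer.poll(1.0)  # block up to 1s waiting for data
            if msg is None:
                continue
            if msg.error():
                print(f"consumer error: {msg.error()}")
                continue
            handle_event(json.loads(msg.value()))
    finally:
        consumer.close()
```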

Leadership, PostgreSQL, Python, SQL, Apache Airflow, Bash, Cloud Computing, ETL, GCP, Git, Kafka, Kubernetes, Data engineering, Data science, REST API, CI/CD, RESTful APIs, Mentoring, Terraform, Data modeling

🔥 Staff Data Engineer
Posted 3 days ago

📍 United States

🧭 Full-Time

💸 160,000 - 230,000 USD per year

🔍 Daily Fantasy Sports

  • 7+ years of experience in a data engineering or data-oriented software engineering role, creating and shipping end-to-end data engineering pipelines.
  • 3+ years of experience acting as technical lead and providing mentorship and feedback to junior engineers.
  • Extensive experience building and optimizing cloud-based data streaming pipelines and infrastructure.
  • Extensive experience exposing real-time predictive model outputs to production-grade systems leveraging large-scale distributed data processing and model training.
  • Experience in most of the following:
  • SQL/NoSQL databases/warehouses: Postgres, BigQuery, BigTable, Materialize, AlloyDB, etc.
  • Replication/ELT services: Data Stream, Hevo, etc.
  • Data Transformation services: Spark, Dataproc, etc.
  • Scripting languages: SQL, Python, Go
  • Cloud platform services in GCP and analogous systems: Cloud Storage, Cloud Compute Engine, Cloud Functions, Kubernetes Engine, etc.
  • Data Processing and Messaging Systems: Kafka, Pulsar, Flink
  • Code version control: Git
  • Data pipeline and workflow tools: Argo, Airflow, Cloud Composer
  • Monitoring and Observability platforms: Prometheus, Grafana, ELK stack, Datadog (see the metrics sketch after this list)
  • Infrastructure as Code platforms: Terraform, Google Cloud Deployment Manager
  • Other platform tools such as Redis, FastAPI, and Streamlit
  • Excellent organizational, communication, presentation, and collaboration skills, with experience working across technical and non-technical teams
  • Graduate degree in Computer Science, Mathematics, Informatics, Information Systems or other quantitative field
  • Enhance the capabilities of our existing Core Data Platform and develop new integrations with both internal and external APIs within the Data organization.
  • Develop and maintain advanced data pipelines and transformation logic using Python and Go, ensuring efficient and reliable data processing.
  • Collaborate with Data Scientists and Data Science Engineers to support the needs of advanced ML development.
  • Collaborate with Analytics Engineers to enhance data transformation processes, streamline CI/CD pipelines, and optimize team collaboration workflows using dbt.
  • Work closely with DevOps and Infrastructure teams to ensure the maturity and success of the Core Data platform.
  • Guide teams in implementing and maintaining comprehensive monitoring, alerting, and documentation practices, and coordinate with Engineering teams to ensure continuous feature availability.
  • Design and implement Infrastructure as Code (IaC) solutions to automate and streamline data infrastructure deployment, ensuring scalable, consistent configurations aligned with data engineering best practices.
  • Build and maintain CI/CD pipelines to automate the deployment of data solutions, ensuring robust testing, seamless integration, and adherence to best practices in version control, automation, and quality assurance.
  • Design and automate data governance workflows and tool integrations across complex environments, ensuring data integrity and protection throughout the data lifecycle.
  • Serve as a Staff Engineer within the broader PrizePicks technology organization by staying current with emerging technologies, implementing innovative solutions, and sharing knowledge and best practices with junior team members and collaborators.
  • Ensure code is thoroughly tested, effectively integrated, and efficiently deployed, in alignment with industry best practices for version control, automation, and quality assurance.
  • Mentor and support junior engineers by providing guidance, coaching and educational opportunities
  • Provide on-call support as part of a shared rotation between the Data and Analytics Engineering teams to maintain system reliability and respond to critical issues.
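
A minimal sketch of the monitoring side referenced in the requirements list, using prometheus_client; the metric names and port are invented:

```python
# Expose pipeline metrics on an HTTP endpoint for Prometheus to
# scrape, feeding dashboards and alerts in Grafana.
import time

from prometheus_client import Counter, Gauge, start_http_server

ROWS_PROCESSED = Counter(
    "pipeline_rows_processed_total", "Rows processed by the pipeline"
)
LAST_SUCCESS = Gauge(
    "pipeline_last_success_timestamp", "Unix time of the last good run"
)

def run_batch(rows) -> None:
    for _ in rows:
        ROWS_PROCESSED.inc()
    LAST_SUCCESS.set(time.time())

if __name__ == "__main__":
    start_http_server(8000)  # metrics served at :8000/metrics
    run_batch(range(1000))
    time.sleep(60)  # keep the endpoint up long enough for a scrape
```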

Leadership, Python, SQL, Cloud Computing, ETL, GCP, Git, Kafka, Kubernetes, Airflow, Data engineering, Go, Postgres, REST API, Spark, CI/CD, Mentoring, DevOps, Terraform, Data visualization, Data modeling, Scripting

🔥 Staff Data Engineer
Posted 4 days ago

📍 United States, Canada

🧭 Full-Time

💸 158,000 - 239,000 USD per year

🔍 Software Development

🏢 Company: 1Password

  • 8+ years of professional software engineering experience.
  • Minimum of 7 years of technical engineering experience building batch and streaming data processing applications, with hands-on coding.
  • In-depth, hands-on experience with extensible data modeling and query optimization, working in Java, Scala, Python, and related technologies.
  • Experience in data modeling across external facing product insights and business processes, such as revenue/sales operations, finance, and marketing.
  • Experience with Big Data query engines such as Hive, Presto, Trino, Spark.
  • Experience with data stores such as Redshift, MySQL, Postgres, Snowflake, etc.
  • Experience using real-time technologies like Apache Kafka, Kinesis, Flink, etc.
  • Experience building scalable services on top of public cloud infrastructure like Azure, AWS, or GCP with extensive use of datastores like RDBMS, key-value stores, etc.
  • Experience leveraging distributed systems at scale, with systems knowledge spanning infrastructure hardware and resources, from bare-metal hosts to containers to networking.
  • Design, develop, and automate large-scale, high-performance batch and streaming data processing systems to drive business growth and enhance product experience (a batch-job sketch follows this list).
  • Build a data engineering strategy that supports a rapidly growing tech company and aligns product priorities with internal business organizations’ desire to leverage data for competitive advantage.
  • Build scalable data pipelines using best-in-class software engineering practices.
  • Develop optimal data models for storage and retrieval, meeting critical product and business requirements.
  • Establish and execute short and long-term architectural roadmaps in collaboration with Analytics, Data Platform, Business Systems, Engineering, Privacy and Security.
  • Lead efforts on continuous improvement to the efficiency and flexibility of the data, platform, and services.
  • Mentor Analytics & Data Engineers on best practices, standards and forward-looking approaches on building robust, extensible and reusable data solutions.
  • Influence and evangelize a high standard of code quality, system reliability, and performance.
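
A minimal PySpark sketch of the batch side of that work; the input path, columns, and output location are placeholders:

```python
# Aggregate purchase events into a partitioned daily-revenue mart.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("daily_revenue_rollup").getOrCreate()

events = spark.read.parquet("s3://example-bucket/events/")  # placeholder

daily_revenue = (
    events
    .filter(F.col("event_type") == "purchase")
    .groupBy(F.to_date("event_ts").alias("day"), "plan")
    .agg(F.sum("amount_usd").alias("revenue_usd"))
)

# Partitioning by day lets downstream queries prune cheaply.
daily_revenue.write.mode("overwrite").partitionBy("day").parquet(
    "s3://example-bucket/marts/daily_revenue/"  # placeholder
)
```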

AWS, Python, SQL, ETL, GCP, Java, Kubernetes, MySQL, Snowflake, Algorithms, Apache Kafka, Azure, Data engineering, Data Structures, Postgres, RDBMS, Spark, CI/CD, RESTful APIs, Mentoring, Scala, Data visualization, Data modeling, Software Engineering, Data analytics, Data management

Posted 4 days ago

📍 United States

🧭 Full-Time

💸 135,000 - 160,000 USD per year

🔍 Healthcare

🏢 Company: Jobgether 👥 11-50 💰 $1,493,585 Seed over 2 years ago · Internet

  • 5+ years of experience in data engineering roles, preferably in fast-paced or data-centric environments
  • Proficient in SQL and experienced with data warehouses such as Snowflake or Redshift
  • Strong experience with cloud platforms (AWS, GCP, or Azure)
  • Familiarity with workflow management tools like Apache Airflow or Luigi
  • Knowledge of data modeling, warehousing architecture, and pipeline automation best practices
  • Degree in Computer Science, Engineering, Mathematics, or related field (Master’s preferred)
  • Familiarity with healthcare data standards like FHIR or HL7 is a plus
  • Strong problem-solving skills and ability to adapt in a dynamic environment
  • Build, optimize, and maintain highly scalable and reliable data pipelines
  • Collaborate with data scientists and analysts to meet data needs across the business
  • Automate data cleansing, validation, transformation, and mining processes (see the sketch after this list)
  • Improve internal data workflows and automate manual processes to enhance scalability
  • Troubleshoot data issues, ensure security compliance, and support infrastructure-related inquiries
  • Deliver high-quality data solutions that empower cross-functional teams with actionable insights
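
A minimal cleansing-and-validation sketch in pandas for the automation work above; the column names are invented, and a real healthcare pipeline would layer FHIR/HL7-aware checks on top:

```python
# Normalize keys, drop incomplete and duplicate rows, and assert a
# simple invariant before anything is loaded downstream.
import pandas as pd

def clean_patients(df: pd.DataFrame) -> pd.DataFrame:
    out = df.copy()
    out["patient_id"] = out["patient_id"].astype(str).str.strip()
    out = out.dropna(subset=["patient_id", "visit_date"])
    out = out.drop_duplicates(subset=["patient_id", "visit_date"])

    # Validation: fail loudly rather than load bad data.
    if (pd.to_datetime(out["visit_date"]) > pd.Timestamp.now()).any():
        raise ValueError("visit_date in the future")
    return out
```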

AWS, SQL, Apache Airflow, ETL, GCP, Snowflake, Azure, Data engineering, Data modeling

🔥 Staff Data Engineer
Posted 4 days ago

📍 Boston, MA; Vancouver, BC; Chicago, IL; and Vancouver, WA.

🧭 Full-Time

💸 200,000 - 228,000 USD per year

🔍 Software Development

🏢 Company: Later 👥 1-10 · Consumer Electronics, iOS, Apps, Software

  • 10+ years of experience in data engineering, software engineering, or related fields.
  • Proven experience leading the technical strategy and execution of large-scale data platforms.
  • Expertise in cloud technologies (Google Cloud Platform, AWS, Azure) with a focus on scalable data solutions (BigQuery, Snowflake, Redshift, etc.).
  • Strong proficiency in SQL, Python, and distributed data processing frameworks (Apache Spark, Flink, Beam, etc.).
  • Extensive experience with streaming data architectures using Kafka, Flink, Pub/Sub, Kinesis, or similar technologies.
  • Expertise in data modeling, schema design, indexing, partitioning, and performance tuning for analytical workloads, including data governance (security, access control, compliance: GDPR, CCPA, SOC 2)
  • Strong experience designing and optimizing scalable, fault-tolerant data pipelines using workflow orchestration tools like Airflow, Dagster, or Dataflow.
  • Ability to lead and influence engineering teams, drive cross-functional projects, and align stakeholders towards a common data vision.
  • Experience mentoring senior and mid-level data engineers to enhance team performance and skill development.
  • Lead the design and evolution of a scalable data architecture that meets analytical, machine learning, and operational needs.
  • Architect and optimize data pipelines for batch and real-time data processing, ensuring efficiency and reliability.
  • Implement best practices for distributed data processing, ensuring scalability, performance, and cost-effectiveness of data workflows.
  • Define and enforce data governance policies, implement automated validation checks, and establish monitoring frameworks to maintain data integrity.
  • Ensure data security and compliance with industry regulations by designing appropriate access controls, encryption mechanisms, and auditing processes.
  • Drive innovation in data engineering practices by researching and implementing new technologies, tools, and methodologies.
  • Work closely with data scientists, engineers, analysts, and business stakeholders to understand data requirements and deliver impactful solutions.
  • Develop reusable frameworks, libraries, and automation tools to improve efficiency, reliability, and maintainability of data infrastructure.
  • Guide and mentor data engineers, fostering a high-performing engineering culture through best practices, peer reviews, and knowledge sharing.
  • Establish and monitor SLAs for data pipelines, proactively identifying and mitigating risks to ensure high availability and reliability (a freshness-check sketch follows this list).
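
A minimal freshness-check sketch for such an SLA, written against the DB-API (sqlite3 here so the example is self-contained); it assumes a hypothetical updated_at column stored as ISO-8601 text with a UTC offset:

```python
# Raise (and page on-call) when a table's newest row is older than
# the agreed freshness SLA.
import sqlite3
from datetime import datetime, timedelta, timezone

FRESHNESS_SLA = timedelta(hours=2)

def check_freshness(conn: sqlite3.Connection, table: str) -> None:
    (latest,) = conn.execute(f"SELECT MAX(updated_at) FROM {table}").fetchone()
    if latest is None:
        raise RuntimeError(f"{table}: table is empty")
    age = datetime.now(timezone.utc) - datetime.fromisoformat(latest)
    if age > FRESHNESS_SLA:
        raise RuntimeError(f"{table}: stale by {age - FRESHNESS_SLA}")
```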

AWS, Leadership, Python, SQL, Cloud Computing, ETL, GCP, Kafka, Kubernetes, Snowflake, Airflow, Azure, Data engineering, Communication Skills, Analytical Skills, Problem Solving, Mentoring, DevOps, Data visualization, Data modeling, Data analytics, Data management

🔥 Senior Data Engineer
Posted 4 days ago

📍 Boston, MA; Vancouver, BC; Chicago, IL; and Vancouver, WA

💸 160,000 - 190,000 USD per year

🔍 Social Media Marketing

🏢 Company: Later 👥 1-10 · Consumer Electronics, iOS, Apps, Software

  • Minimum of 5 years in data engineering or related fields, with a strong focus on building data infrastructure and pipelines.
  • Bachelor’s degree in Computer Science, Engineering, or a related technical field; advanced degree preferred.
  • Design and build a robust data warehouse architecture.
  • Design, build, and maintain scalable data pipelines for both batch and real-time processing, ensuring high availability and reliability.
  • Develop reliable transformation layers and data pipelines from ambiguous business processes using tools like dbt (see the dbt sketch after this list).
  • Establish optimized data architectures using cloud technologies, and implement both batch and streaming data processing systems.
  • Enforce data quality checks and governance practices to maintain data integrity and compliance.
  • Work with data scientists, product managers, and business stakeholders to understand data needs and deliver actionable insights.
  • Analyze and optimize data pipelines for performance and cost-effectiveness.
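
A minimal sketch of driving dbt from a pipeline step via its CLI (`dbt run` and `dbt test` are standard commands); the model selector is a placeholder:

```python
# Run the selected dbt models, then their tests, failing the job if
# either command exits nonzero.
import subprocess

def run_dbt(selector: str = "orders") -> None:  # placeholder selector
    for args in (
        ["dbt", "run", "--select", selector],
        ["dbt", "test", "--select", selector],
    ):
        if subprocess.run(args).returncode != 0:
            raise RuntimeError(f"{' '.join(args)} failed")
```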

AWS, SQL, Apache Airflow, Cloud Computing, ETL, Data engineering

🔥 Data Engineer
Posted 4 days ago

📍 United States

🧭 Temporary

💸 103,500 - 143,500 USD per year

🔍 Public health

AWS, Project Management, Python, SQL, Apache Airflow, Cloud Computing, ETL, Apache Kafka, Data engineering, Data Structures, REST API, Spark, Communication Skills, Analytical Skills, CI/CD, Problem Solving, Agile methodologies, RESTful APIs, Terraform, Excellent communication skills, Data visualization, Data modeling, Data analytics, Data management
