Senior Data Infrastructure Engineer

Posted 2024-11-15

💎 Seniority level: Senior

📍 Location: United States

💸 Salary: 140,000 - 180,000 USD per year

🔍 Industry: Fintech, specialty finance

🏢 Company: Libertas Funding

🗣️ Languages: English

⏳ Experience: 7+ years of software engineering (4+ as data engineer)

🪄 Skills: AWS, Docker, Python, Software Development, SQL, Agile, Business Intelligence, ETL, Git, Kubernetes, MySQL, Data engineering, Data science, Postgres, Serverless, NoSQL, Communication Skills, CI/CD, DevOps, Terraform, Documentation, Compliance

Requirements:
  • Experience configuring monitoring and alerting for AWS resources (see the sketch after this list).
  • Proven experience implementing infrastructure as code (IaC) using Terraform or Ansible.
  • 7+ years of software engineering experience (4+ as a data engineer) designing, developing, and delivering application solutions for enterprise-level development and integration projects.
  • Strong communication skills and previous experience working with cross-functional business groups.
  • Experience with maintaining, updating, and integrating data pipelines and data warehouses.
  • Proficiency in capturing and maintaining data in SQL and NoSQL databases.
  • Comfort working on a fast-paced software engineering team and following established software development cycles.
  • Strong sense of agency and intrinsic motivation to push projects forward and to learn new technologies and tools.
  • Process-oriented, detail-oriented, and analytical mindset.
  • Genuine enjoyment of solving complex problems with a methodical approach.
  • Strong analytical and problem-solving skills, with a focus on solving business problems.
  • Experience working in Agile software development processes.
  • Deep knowledge of continuous integration and delivery (CI/CD), including how to build deployment pipelines and infrastructure.
  • Experience with CloudWatch, IAM, Lambda.
  • Experience with Terraform, CloudFormation, Puppet, or other infrastructure as code (IaC) technologies.
  • Proficiency in Git, GitHub Actions, and Git workflows.
  • Experience with logging and monitoring tools.
  • Strong development skills, with adherence to best practices that prevent breaking changes.
  • Knowledge of data governance best practices, including data quality/integrity and privacy.
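
To make the monitoring and IaC requirements above concrete, here is a minimal sketch of configuring a CloudWatch alarm in Python with boto3. The alarm name, Lambda function, threshold, and SNS topic ARN are illustrative assumptions, not details from this posting:

```python
# Minimal sketch of CloudWatch alerting, per the monitoring requirement above.
# The alarm name, function name, threshold, and SNS topic ARN are
# illustrative assumptions, not details from this posting.
import boto3

cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

cloudwatch.put_metric_alarm(
    AlarmName="nightly-etl-errors",  # hypothetical alarm name
    Namespace="AWS/Lambda",
    MetricName="Errors",
    Dimensions=[{"Name": "FunctionName", "Value": "nightly-etl"}],  # hypothetical function
    Statistic="Sum",
    Period=300,  # evaluate in 5-minute windows
    EvaluationPeriods=1,
    Threshold=1,
    ComparisonOperator="GreaterThanOrEqualToThreshold",
    TreatMissingData="notBreaching",  # no invocations is not an error
    AlarmActions=["arn:aws:sns:us-east-1:123456789012:data-alerts"],  # hypothetical SNS topic
)
```

Given the IaC requirement, this kind of alarm would more likely be declared in Terraform or Ansible than in an ad-hoc script; the boto3 form is just the most compact way to show the moving parts.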
Responsibilities:
  • Scope, design, and implement repeatable automation of the data, build, test, deploy, and release processes.
  • Automate the timely and accurate delivery of reporting and analytics data.
  • Build and maintain robust, observable ETL data pipelines (a toy sketch follows this list).
  • Update and maintain AWS cloud and on-premises data warehouse configuration and components.
  • Investigate and remediate technical issues.
  • Create and maintain documentation as it relates to system configuration, mapping, processes, and service records.
  • Build and maintain continuous integration and continuous delivery (CI/CD) technologies.
  • Provide technical expertise in building and managing infrastructure as code (IaC).
  • Maintain performance and SLA metrics for build/release systems.
  • Collaborate within the development team to help it meet goals and objectives.
  • Proactively document and communicate knowledge to enable others to quickly solve the next challenge.
  • Adhere to compliance procedures and internal/operational risk controls.
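
For the pipeline responsibilities above, a toy sketch of one observable ETL extract step in Python, assuming a Postgres source via psycopg2 (Postgres is in the posting's skills list); the DSN, query, and table are hypothetical:

```python
# Toy sketch of an observable extract step for an ETL pipeline, per the
# responsibilities above. The DSN, query, and table are hypothetical.
import logging
import time

import psycopg2  # assumed Postgres source, per the skills listed above

logging.basicConfig(level=logging.INFO, format="%(asctime)s %(name)s %(message)s")
log = logging.getLogger("nightly-etl")


def extract_rows(dsn: str, query: str) -> list:
    """Pull source rows, logging row counts and timing so the run is observable."""
    start = time.monotonic()
    with psycopg2.connect(dsn) as conn, conn.cursor() as cur:
        cur.execute(query)
        rows = cur.fetchall()
    log.info("extracted %d rows in %.2fs", len(rows), time.monotonic() - start)
    return rows


if __name__ == "__main__":
    extract_rows(
        "postgresql://etl:secret@source-db:5432/finance",  # hypothetical DSN
        "SELECT id, amount, funded_at FROM fundings",      # hypothetical table
    )
```

Emitting row counts and durations per step is the cheap end of "observable pipelines": those log lines are exactly what an alarm like the CloudWatch sketch earlier can alert on when a load stalls or shrinks.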