Software Engineer, Data

Posted 2024-11-09

💎 Seniority level: Mid-level, 5+ years of experience in software development

📍 Location: United States

💸 Salary: 100000 - 135000 USD per year

🔍 Industry: Healthcare

🏢 Company: Loyal

⏳ Experience: 5+ years of experience in software development

🪄 Skills: Software Development, SQL, ASP.NET, ETL, Microsoft .NET, Microsoft SQL Server, NUnit, C#, Product Development, Data Engineering

Requirements:
  • 5+ years of experience in software development with exposure to data engineering.
  • 3+ years of experience with Microsoft .NET technology stack (C#, .NET, ASP.NET, Web APIs, Microsoft SQL Server).
  • Proficiency with T-SQL and unit testing frameworks (xUnit, NUnit, etc.).
  • Experience working in a SaaS environment or navigating ambiguous settings.
  • Bachelor’s degree in computer science or data science, or equivalent work experience.
Responsibilities:
  • Collaborate closely with internal teams to design, build, and maintain an internal ETL tool.
  • Build and maintain custom onboarding solutions for diverse client environments.
  • Develop backend services focused on data-intensive applications and infrastructure.
  • Solve product and client-specific business needs through software and enhancements.
  • Lead mid-sized development projects from start to finish.
  • Actively participate in code reviews and troubleshoot issues in upper environments.
  • Mentor team members to share knowledge and foster a strong team culture.
Related Jobs

📍 United States of America

🧭 Full-Time

💸 90000 - 215000 USD per year

🔍 Insurance

🏢 Company: External

  • Minimum 5 years of experience with software development in one or more programming languages, and with data structures/algorithms.
  • Minimum 3 years of experience testing, maintaining, or launching software products.
  • 1 year of experience with software design and architecture.
  • Minimum 3 years of experience developing large-scale infrastructure, distributed systems or networks.

  • Write and test product or system development code.
  • Participate in, or lead, design reviews with peers and stakeholders to decide among available technologies.
  • Review code developed by other developers and provide feedback.
  • Contribute to existing documentation or educational content.
  • Triage product or system issues, then debug, track, and resolve them.
  • Collaborate with product managers and engineering teams.

Software Development, Strategy, Algorithms, Data Structures, Documentation

Posted 2024-11-21

📍 Americas

🔍 Digital transformation, Software Engineering

  • A collaborative approach to engineering.
  • Passionate about work, team dynamics, and customer engagement.
  • Ability to communicate effectively across multiple time zones.

  • Work in the Data Science Chapter to bring research to production deployments.
  • Develop scalable and reliable systems.
  • Collaborate within the SaaS Engineering team.
  • Support the integration of differentiating search technologies into the cloud-based SaaS platform.

Machine Learning, Data Science, Communication Skills

Posted 2024-11-15

📍 San Francisco Bay Area

🔍 Financial planning and decision-making software

  • Confidence with at least one programming language of your choice.
  • Ability to quickly learn new technologies.
  • Strong software engineering and computer science fundamentals.
  • Extensive experience with common big data workflow frameworks and solutions.

  • Laying the foundation of an exceptional data engineering practice.
  • Collaborating with the team to enhance big data workflow frameworks and solutions.

Backend Development, Python, Software Development, SQL, Apache Airflow, Apache Hadoop, Data Analysis, Elasticsearch, Git, Data Engineering, REST APIs

Posted 2024-11-10

📍 United States

🔍 Life sciences

  • Applicants must have the unrestricted right to work in the United States.
  • Veeva will not provide sponsorship at this time.

  • Spearhead the development of new architecture for the Data platform from the ground up.
  • Design and build a resilient, scalable cloud-based platform along with its accompanying tools.
  • Empower Opendata teams to efficiently create and distribute valuable data assets.
  • Exercise end-to-end ownership of the project.

Backend Development, Leadership, Software Development, Cross-functional Team Leadership, Communication Skills, Analytical Skills, Collaboration

Posted 2024-11-07

📍 United States

🔍 Life sciences

  • Applicants must have the unrestricted right to work in the United States.
  • Veeva will not provide sponsorship at this time.

  • Spearhead the development of entirely new architecture for Veeva's Data platform from the ground up.
  • Design and build a resilient, scalable cloud-based platform along with its accompanying tools.
  • Empower Opendata teams to efficiently create and distribute valuable data assets.
  • End-to-end ownership of projects, guiding the course of action and executing solutions creatively.

Backend Development, Software Development, Cloud Computing, Data Analysis, Git, Java, JavaScript, Software Architecture

Posted 2024-11-07

📍 US

🧭 Full-Time

💸 110000 - 140000 USD per year

🔍 Marketing and data intelligence

  • Collaboration with senior engineers.
  • Strategic planning for data initiatives.
  • Continuous learning and improvement.
  • Technical excellence in data solutions.

  • Contribute to the development and automation of data pipelines.
  • Ensure technical excellence and innovation in complex data projects.
  • Collaborate with senior engineers to influence strategy and technical roadmap.
  • Participate in implementing innovative tooling and architecture patterns.
  • Foster a culture of data engineering excellence.

Software Development, SQL, ETL, Data Engineering

Posted 2024-11-07

📍 USA

🧭 Full-Time

💸 169000 - 240000 USD per year

🔍 Financial services

  • 5+ years of industry experience in building large scale production systems.
  • Experience building and owning large-scale stream processing systems.
  • Experience building and operating robust and highly available infrastructure.
  • Working knowledge of Relational and NoSQL databases.
  • Experience working with Data Warehouse solutions.
  • Experience with industry standard stream processing frameworks like Spark, Samza, Flink, Beam etc.
  • Experience leading technical projects and mentoring junior engineers.
  • Exceptionally collaborative with a history of delivering complex technical projects and working closely with stakeholders.
  • This position requires either equivalent practical experience or a Bachelor’s degree in a related field.

  • Help support the Data Platform that forms the backbone for several thousand offline workloads at Affirm.
  • Design and build data infrastructure systems, services, and tools to handle new Affirm products and business requirements that securely scale over millions of users and their transactions.
  • Build frameworks and services which will be used by other engineering teams at Affirm to manage billions of dollars in loans and power customer experiences.
  • Improve the reliability and efficiency of the Data Platform at scale.
  • Engage other teams at Affirm about their use of the Data platform to ensure we are always building the right thing.

Backend Development, Leadership, Software Development, SQL, Data Analysis, Elasticsearch, Apache Kafka, Cross-functional Team Leadership, Spark, Collaboration

Posted 2024-10-24

📍 United States

🧭 Full-Time

💸 $232,560 - $290,700 per year

🔍 Data integration and management

  • 5+ years of hands-on or research experience with high-performance relational data management systems.
  • Deep understanding of infrastructure & software optimizations and performance engineering to drive significant performance, latency, and availability improvements.
  • Proven track record of leading and delivering large and complicated projects.
  • Strong development skills in Java and C++.
  • Solid experience with public clouds (AWS, Azure, GCP).
  • Demonstrated knowledge of columnar storage formats.
  • Growth mindset and excitement about breaking the status quo by seeking innovative solutions.
  • An excellent team player who consistently makes everyone around you better.
  • Strongly prefer an MS or PhD in Computer Science, ideally focusing on database management and/or storage engines.

  • Partner closely with product teams to understand requirements and design cutting-edge new capabilities that go directly into customer’s hands.
  • Design, develop, implement, and operate highly reliable large-scale data lake systems in cooperation with a dedicated data lake engineering team.
  • Contribute to open-source projects such as DuckDB.
  • Embrace Fivetran innovations with open-source standards and toolsets.
  • Analyze fault-tolerance and high availability issues, performance and scale challenges, and solve them.
  • Ensure operational excellence of the services and meet the commitments to our customers regarding security, reliability, availability, and performance.
  • Set technical directions and influence cross-functional teams.

AWS, GCP, Java, Kubernetes, C++, Azure, gRPC, Postgres

Posted 2024-10-12

📍 Spain, US, FR, UK, IN, IT, MX, IE

🔍 Cloud development technology

🏢 Company: LocalStack

  • Strong hands-on experience with modern Python development including type hinting and unit testing.
  • Strong background in data processing and systems programming in Unix environments.
  • Strong understanding of SQL and transaction management in relational databases.
  • Proficiency with PostgreSQL, including server configuration and writing custom functions.
  • Experience with cloud computing APIs and platforms like AWS or Azure.
  • Ideally, hands-on experience with data platforms like Snowflake and AWS services.
  • Experience with SQL parsing and query modification libraries.
  • Decent knowledge of Java and big data platforms like Presto and Spark.
  • Prior experience contributing to open source projects is a plus.

  • Drive and co-own the development of our Snowflake emulator, a new product currently being evaluated by beta users.
  • Reverse-engineer data platform APIs to reproduce local behavior using database products.
  • Write unit and integration tests to ensure parity with real systems.
  • Conduct technical spikes and document architectural decisions.
  • Integrate open source tools into solutions and maintain documentation.
  • Conduct performance evaluations and optimizations.
  • Run internal demos and knowledge sharing sessions.
  • Communicate with customers to understand requirements and resolve issues.
  • Work with the Data team to embed analytics into the product for insights.

AWS, Docker, PostgreSQL, Python, Software Development, SQL, Cloud Computing, DQL, Hadoop, Java, Snowflake, Azure, Spark

Posted 2024-10-08

📍 United States

🧭 Full-Time

💸 $240,000 - $270,000 per year

🔍 Blockchain intelligence data platform

  • Bachelor's degree (or equivalent) in Computer Science or a related field.
  • 5+ years of experience in building distributed system architecture, with a particular focus on incremental updates from inception to production.
  • Strong programming skills in Python and SQL.
  • Deep technical expertise in advanced data structures and algorithms for incremental updating of data stores (e.g., Graphs, Trees, Hash Maps).
  • Comprehensive knowledge across all facets of data engineering, including implementing and managing incremental updates in data stores like BigQuery, Snowflake, RedShift, Athena, Hive, and Postgres.
  • Orchestrating data pipelines and workflows focused on incremental processing using tools such as Airflow, DBT, Luigi, Azkaban, and Storm.
  • Developing and optimizing data processing technologies and streaming workflows for incremental updates (e.g., Spark, Kafka, Flink).
  • Deploying and monitoring scalable, incremental update systems in public cloud environments (e.g., Docker, Terraform, Kubernetes, Datadog).
  • Expertise in loading, querying, and transforming large datasets with a focus on efficiency and incremental growth.

  • Design and build our Cloud Data Warehouse with a focus on incremental updates to improve cost efficiency and scalability.
  • Research innovative methods to incrementally optimize data processing, storage, and retrieval to support efficient data analytics and insights.
  • Develop and maintain ETL pipelines that transform and incrementally process petabytes of structured and unstructured data to enable data-driven decision-making.
  • Collaborate with cross-functional teams to design and implement new data models and tools focused on accelerating innovation through incremental updates.
  • Continuously monitor and optimize the Data Platform's performance, focusing on enhancing cost efficiency, scalability, and reliability.

Docker, Python, SQL, ETL, Kafka, Kubernetes, Machine Learning, Snowflake, Airflow, Algorithms, Data Engineering, Data Science, Data Structures, Postgres, Spark, Collaboration, Terraform

Posted 2024-09-25