Data Engineer-I

Posted 2 days ago

📍 Location: USA

🔍 Industry: Healthcare

🏢 Company: Innovaccer Inc.

🗣️ Languages: English

🪄 Skills: Python, SQL, Bash, ETL, Microsoft Azure, Postgres, Data modeling

Requirements:
  • SQL knowledge
  • ETL/ELT/Data pipeline knowledge
  • Python knowledge
  • PowerShell / Bash knowledge
  • Excellent problem-solving and effective communication skills
  • Self-motivation, integrity and honesty
Responsibilities:
  • Collaborate with the team, management, and other departments using virtual tools
  • Run Production data pipelines/processes, ensure the integrity of the data, and send out deliverables based on requirement/runbook documentation
  • Coordinate with the various technical teams to resolve issues/bugs/optimize said production processes
  • Coordinate with internal client facing team members to communicate the status of deliverables
  • Help develop/improve technical documentation to guide future software development projects and operations
  • Dedicate time to exploring additions to the tech stack and new capabilities where there are applicable use cases
  • Provide critical thinking, technical innovation, and extra attention to detail by serving as a trusted team member and peer code reviewer
  • Assist with external client communications when deliverables or receivables do not meet technical or project requirements, ensuring timely resolution and alignment
Related Jobs

📍 United States

🧭 Full-Time

💸 90,000 - 110,000 USD per year

🔍 Software Development

🏢 Company: Energy Solutions - USA

Requirements:
  • BS/BA in Computer Science, Physics/Engineering, Business or Mathematics
  • Experience in building ETL data pipelines, real-time pipelines are a plus
  • Strong programming skills in Python and SQL
  • Minimum 2 years of data warehouse development and strong fundamentals in dimensional data modeling
  • Minimum 2 years of experience creating SQL queries and ETL design, implementation, and maintenance
  • Minimum 2 years of experience developing data pipelines using Python
  • Minimum 2 years of experience with AWS data engineering and MLOps tools and services (e.g., S3, Redshift, RDS)
  • Proven ability to aggregate, normalize, munge, analyze, and summarize value from disparate datasets
  • Strong oral, written, and interpersonal communication skills
  • High degree of accuracy and attention to detail
  • Ability to establish priorities and work independently with minimum supervision
Responsibilities:
  • Collaborate with stakeholders to understand data requirements and develop data-driven solutions that support business goals
  • Analyze data sources and pipelines needed for the various data projects
  • Design and implement scalable data pipelines, ETL processes, and data integration solutions for the IRIS project using Python, SQL, and AWS technologies
  • Develop and maintain a data warehouse and optimize data storage, processing, and retrieval
  • Build schema, populate the database, and automate reporting
  • Document and maintain technical specifications, ETL processes, data mappings, and dictionaries
  • Support the Data Science Center of Excellence (DSCOE) with other data engineering needs
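The "build schema, populate the database, and automate reporting" responsibility, combined with the dimensional-modeling requirement, can be sketched as a toy star schema with one fact and one dimension table plus an aggregated report over them. The table and column names (`dim_date`, `fact_energy_usage`) are hypothetical examples, not details from the posting, and SQLite stands in for a real warehouse.

```python
# Illustrative sketch only: a toy star schema (one fact table, one dimension
# table) of the kind dimensional data modeling produces, with an automated
# report aggregating the fact by the dimension. Names are hypothetical.
import sqlite3

SCHEMA = """
CREATE TABLE dim_date (
    date_key   INTEGER PRIMARY KEY,   -- surrogate key, e.g. 20240101
    full_date  TEXT NOT NULL
);
CREATE TABLE fact_energy_usage (
    date_key   INTEGER NOT NULL REFERENCES dim_date(date_key),
    site_id    TEXT    NOT NULL,
    kwh        REAL    NOT NULL
);
"""

def build_schema(conn: sqlite3.Connection) -> None:
    conn.executescript(SCHEMA)

def daily_report(conn: sqlite3.Connection):
    # Automated reporting: total usage per day via a fact-to-dimension join.
    return conn.execute(
        """SELECT d.full_date, SUM(f.kwh)
           FROM fact_energy_usage f JOIN dim_date d USING (date_key)
           GROUP BY d.full_date ORDER BY d.full_date"""
    ).fetchall()
```

In a warehouse like Redshift the same shape applies, with distribution and sort keys replacing SQLite's defaults.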

AWS, Python, SQL, ETL, Data modeling

Posted 15 days ago
📍 U.S.

💸 142,500 - 155,000 USD per year

🔍 Music technology

🏢 Company: Splice (👥 101-250 employees; 💰 $55,000,000 Series D about 4 years ago; Media and Entertainment, Music, Machine Learning, Software)

Requirements:
  • 5+ years of experience building scalable and durable software.
  • Demonstrated mastery of Python, SQL, and Unix fundamentals.
  • Operational excellence in maintaining Data Warehouses such as GCP BigQuery or AWS RedShift.
  • Strong familiarity with data transformation frameworks like sqlmesh or dbt.
  • Experience with business intelligence platforms or data visualization frameworks like Looker, Hashtable, or Observable.
  • Strong debugging skills, especially with distributed systems.
  • Experience building and supporting cloud infrastructure with Google Cloud Platform (GCP) and Amazon Web Services (AWS).
  • Clear and consistent communication in a distributed environment.
Responsibilities:
  • Own and operate the structure of the Data Warehouse, ensuring reliable ingestion of mission-critical data and reliable builds of our pipelines.
  • Build and maintain self-service tools and extensible datasets for organizational insights.
  • Identify and execute projects addressing scalability issues, automating workflows, and simplifying datasets for analytics.
  • Ensure data quality through tests, observability, RFC reviews, and guidance in data modeling.
  • Participate in business hours-only on-call rotation to maintain system uptime and quality.
  • Cultivate a culture of data literacy and data-driven decision making.
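The "ensure data quality through tests" responsibility above can be sketched as a lightweight batch check that runs before a warehouse build and reports human-readable violations. The row fields (`id`, `plays`) and rules here are hypothetical examples, not details from the posting.

```python
# Illustrative sketch only: a lightweight data-quality test of the kind that
# guards warehouse builds. Field names and rules are hypothetical.
def check_quality(rows: list[dict]) -> list[str]:
    """Return a list of human-readable violations; empty means the batch passes."""
    problems = []
    seen_ids = set()
    for i, row in enumerate(rows):
        # Rule 1: primary-key uniqueness.
        if row.get("id") in seen_ids:
            problems.append(f"row {i}: duplicate id {row['id']!r}")
        seen_ids.add(row.get("id"))
        # Rule 2: metric must be present and non-negative.
        if row.get("plays") is None or row["plays"] < 0:
            problems.append(f"row {i}: plays must be a non-negative number")
    return problems
```

Frameworks like dbt express the same idea declaratively (e.g. `unique` and `not_null` column tests), but the underlying check is the same.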

AWS, Python, SQL, GCP, Data engineering, Data visualization, Debugging

Posted about 2 months ago