Remote JavaScript Jobs

Data engineering
803 jobs found.

Set alerts to receive daily emails with new job openings that match your preferences.


📍 Brazil

🔍 Data Governance

🏢 Company: TELUS Digital Brazil

  • At least 3 years of experience in Data Governance, Metadata Management, Data Cataloging, or Data Engineering.
  • Have actively participated in the design, implementation, and management of data governance frameworks and data catalogs.
  • Experience working with Collibra and a strong understanding of the Collibra Operating Model, workflows, domains, and policies.
  • Experience working with APIs.
  • Experience with a low-code/no-code ETL tool, such as Informatica
  • Experience working with databases and data modeling projects, as well as practical experience utilizing SQL
  • Effective English communication - able to explain technical and non-technical concepts to different audiences
  • Experience with a general-purpose programming language such as Python or Scala
  • Ability to work well in teams and interact effectively with others
  • Ability to work independently and manage multiple tasks simultaneously while meeting deadlines
  • Conduct detailed assessments of the customer’s data governance framework and current-state Collibra implementation
  • Translate business needs into functional specifications for Collibra use cases
  • Serve as a trusted advisor to the customer’s data governance leadership
  • Lead requirement-gathering sessions and workshops with business users and technical teams
  • Collaborate with cross-functional teams to ensure data quality and support data-driven decision-making, driving greater functionality in our data systems
  • Collaborate with project managers and product owners to assist in prioritizing, estimating, and planning development tasks
  • Provide constructive feedback and share expertise with fellow team members, fostering mutual growth and learning
  • Demonstrate a commitment to accessibility and ensure that your work considers and positively impacts others
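
The requirements above pair Collibra with general API work; the snippet below is a minimal sketch of pulling assets from a Collibra-style REST endpoint with Python's requests library. The host, endpoint path, credentials, and domain ID are placeholders rather than details from the listing.

```python
# Minimal sketch: query a Collibra-style metadata catalog over REST.
# Base URL, endpoint path, and credentials are placeholders; check your
# instance's API documentation for the actual paths and auth scheme.
import requests

BASE_URL = "https://your-instance.collibra.com/rest/2.0"  # assumed v2 REST path
AUTH = ("svc_governance", "change-me")                     # basic-auth placeholder

def list_assets(domain_id: str, limit: int = 50) -> list:
    """Return up to `limit` assets belonging to one governance domain."""
    resp = requests.get(
        f"{BASE_URL}/assets",
        params={"domainId": domain_id, "limit": limit},
        auth=AUTH,
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json().get("results", [])

if __name__ == "__main__":
    for asset in list_assets(domain_id="00000000-0000-0000-0000-000000000000"):
        print(asset.get("name"), "-", asset.get("status", {}).get("name"))
```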

Python, SQL, ETL, Data engineering, RESTful APIs, Data visualization, Data modeling, Data analytics, Data management

Posted 41 minutes ago
Apply

📍 United States

🧭 Full-Time

💸 135,000 - 170,000 USD per year

🔍 Healthcare

🏢 Company: SmarterDx 👥 101-250 💰 $50,000,000 Series B about 1 year ago · Artificial Intelligence (AI) · Hospital · Information Technology · Health Care

  • 3+ years of analytics engineering experience in the healthcare industry, involving clinical and/or billing/claims data.
  • You are well-versed in SQL and ETL processes; significant experience with dbt is a must
  • You have experience in a general purpose programming language (Python, Java, Scala, Ruby, etc.)
  • You have strong experience in data modeling and its implementation in production data pipelines.
  • You are comfortable with the essentials of data orchestration
  • Designing, developing, and maintaining dbt data models that support our healthcare analytics products.
  • Integrating and transforming customer data to conform to our data specifications and standards.
  • Collaborating with cross-functional teams to translate data and business requirements into effective data models.
  • Configuring and improving data pipelines that integrate and connect the data models.
  • Conducting QA and testing on data models to ensure data accuracy, reliability, and performance.
  • Applying industry standards and best practices around data modeling, testing, and data pipelining.
  • Participating in a rotation with other engineers to help investigate and resolve data-related issues.
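
The role centers on dbt models running inside production pipelines, and Apache Airflow appears in the skill tags below; a minimal orchestration sketch, with the project path, schedule, and model selectors assumed for illustration, could look like this:

```python
# Minimal Airflow DAG sketch: build and test dbt models on a daily schedule.
# The project path, schedule, and --select arguments are placeholders.
from datetime import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator

with DAG(
    dag_id="dbt_daily_build",
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    dbt_run = BashOperator(
        task_id="dbt_run",
        bash_command="cd /opt/analytics/dbt_project && dbt run --select staging+ marts",
    )
    dbt_test = BashOperator(
        task_id="dbt_test",
        bash_command="cd /opt/analytics/dbt_project && dbt test --select staging+ marts",
    )
    dbt_run >> dbt_test  # only test once the models have built successfully
```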

AWS, PostgreSQL, Python, SQL, Apache Airflow, ETL, Snowflake, Data engineering, Data modeling, Data analytics

Posted 43 minutes ago
Apply

📍 United States

💸 175,000 - 200,000 USD per year

🔍 Manufacturing

  • 10+ years of experience in data, analytics, and solution architecture roles.
  • Proven success in a pre-sales or client-facing architecture role.
  • Deep understanding of Microsoft Azure data services, including: Azure Synapse, Data Factory, Data Lake, Databricks/Spark, Fabric, etc.
  • Experience aligning modern data architectures to business strategies in the manufacturing or industrial sector.
  • Experience scoping, pricing, and estimating delivery efforts in early-stage client engagements.
  • Clear and confident communicator who can build trust with technical and business stakeholders alike.
  • Lead strategic discovery sessions with executives and business stakeholders to understand business goals, challenges, and technical landscapes.
  • Define solution architectures and delivery roadmaps that align with client objectives and Microsoft best practices.
  • Scope, estimate, and present proposals for complex Data, AI, and Azure-based projects.
  • Architect end-to-end data and analytics solutions using Microsoft Azure (Data Lake, Synapse, ADF, Power BI, Microsoft Fabric, Azure AI Services, etc.).
  • Oversee solution quality from ideation through handoff to delivery teams.
  • Support and guide sales consultants to align solutions, progress deals, and close business effectively.
  • Collaborate with Delivery to ensure project success and customer satisfaction.
  • Contribute to thought leadership, IP development, and reusable solution frameworks.

SQL, Cloud Computing, ETL, Machine Learning, Microsoft Azure, Microsoft Power BI, Azure, Data engineering, RESTful APIs, Data modeling, Data analytics, Data management

Posted about 2 hours ago
Apply

🔥 Product Manager

📍 United States, Africa, Latin America, Asia

🧭 Full-Time

🔍 AI

🏢 Company: Pareto.AI 👥 101-250 💰 $4,500,000 Seed about 3 years ago · Software

  • 3+ years shipping data infrastructure or ML-adjacent SaaS products with measurable adoption.
  • Proven record of rapid, iterative releases—multiple significant deployments per quarter.
  • Competence with SQL and basic Python; can self-serve analysis without data-engineering support.
  • Familiarity with data-collection pipelines, LLM training stages and workflows, and privacy frameworks (GDPR, CCPA).
  • Comfortable translating technical detail into clear product choices.
  • Exceptional written communication; produce single-page briefs that unblock design and engineering the same day.
  • Own the end-to-end life cycle for assigned platform modules—requirements, prioritization, release, and iteration.
  • Define and monitor metrics for data latency, cost per datapoint, and quality compliance; drive course-corrections in real time.
  • Conduct regular customer and stakeholder interviews; translate insights into backlog refinements and roadmap updates.
  • Facilitate sprint rituals and decision reviews, ensuring blockers are identified and resolved within service-level targets.
  • Present concise status and recommendations to leadership, backing arguments with data.
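
The listing expects self-serve analysis of latency and cost-per-datapoint metrics with basic Python; a small pandas sketch, assuming hypothetical column names (batch_id, collected_at, delivered_at, cost_usd), might look like this:

```python
# Small sketch: per-batch data latency and cost-per-datapoint metrics with pandas.
# The column names are assumptions for illustration, not a real schema.
import pandas as pd

def pipeline_metrics(events: pd.DataFrame) -> pd.DataFrame:
    """Return p95 latency (hours) and cost per datapoint for each batch."""
    events = events.assign(
        latency_h=(events["delivered_at"] - events["collected_at"]).dt.total_seconds() / 3600
    )
    return events.groupby("batch_id").agg(
        datapoints=("batch_id", "size"),
        p95_latency_h=("latency_h", lambda s: s.quantile(0.95)),
        cost_per_datapoint=("cost_usd", lambda s: s.sum() / len(s)),
    )
```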

Python, SQL, Data Analysis, Machine Learning, Product Management, Data engineering, RESTful APIs

Posted about 2 hours ago
Apply

🔥 Senior Data Engineer

📍 United States of America

🏢 Company: IDEXX

  • Bachelor’s degree in Computer Science, Computer Engineering, Information Systems, Information Systems Engineering, or a related field and 5 years of experience, or a Master’s degree in one of those fields and 3 years of related professional experience.
  • Advanced SQL knowledge and experience working with relational databases, including Snowflake, Oracle, Redshift.
  • Experience with AWS or Azure cloud platforms
  • Experience with data pipeline and workflow scheduling tools: Apache Airflow, Informatica.
  • Experience with ETL/ELT tools and data processing techniques
  • Experience in database design, development, and modeling
  • 3 years of related professional experience with object-oriented languages: Python, Java, and Scala
  • Design and implement scalable, reliable distributed data processing frameworks and analytical infrastructure
  • Design metadata and schemas for assigned projects based on a logical model
  • Create scripts for physical data layout
  • Write scripts to load test data
  • Validate schema design
  • Develop and implement node cluster models for unstructured data storage and metadata
  • Design advanced-level Structured Query Language (SQL), data definition language (DDL), and Python scripts
  • Define, design, and implement data management, storage, backup and recovery solutions
  • Design automated software deployment functionality
  • Monitor structural performance and utilization, identifying problems and implementing solutions
  • Lead the creation of standards, best practices and new processes for operational integration of new technology solutions
  • Ensure environments are compliant with defined standards and operational procedures
  • Implement measures to ensure data accuracy and accessibility, constantly monitoring and refining the performance of data management systems
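
Several responsibilities above (schema scripts, loading test data, validating the design) are concrete enough to sketch; the example below generates synthetic rows and loads them through Snowflake's Python connector. The account, credentials, and table layout are assumptions, not details from the listing.

```python
# Sketch: generate synthetic test rows and load them into a Snowflake table.
# Connection parameters and the target table/columns are placeholders.
import random
from datetime import date, timedelta

import snowflake.connector

ROWS = [
    (i,
     f"clinic_{i % 20}",
     date(2024, 1, 1) + timedelta(days=random.randint(0, 364)),
     round(random.uniform(10, 500), 2))
    for i in range(1_000)
]

conn = snowflake.connector.connect(
    account="your_account", user="loader", password="change-me",
    warehouse="LOAD_WH", database="DEV", schema="STAGING",
)
try:
    cur = conn.cursor()
    cur.execute("""
        CREATE TABLE IF NOT EXISTS test_orders (
            order_id INTEGER, clinic_id STRING, order_date DATE, amount NUMBER(10, 2)
        )
    """)
    cur.executemany(
        "INSERT INTO test_orders (order_id, clinic_id, order_date, amount) VALUES (%s, %s, %s, %s)",
        ROWS,
    )
    cur.execute("SELECT COUNT(*) FROM test_orders")
    print("rows loaded:", cur.fetchone()[0])  # quick validation of the load
finally:
    conn.close()
```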

AWS, Python, SQL, Apache Airflow, Cloud Computing, ETL, Java, Oracle, Snowflake, Azure, Data engineering, Scala, Data modeling, Data management

Posted about 3 hours ago
Apply

📍 United States

🧭 Full-Time

💸 138,000 - 150,000 USD per year

🔍 Education

🏢 Company: Gradient Learning

  • 5+ years of experience working in the K-12 space
  • 3+ years of experience in data product development
  • 3+ years of experience translating complex data into educator-friendly visualizations using Tableau
  • 3+ years of people management experience
  • 3+ years of experience with Snowflake or comparable cloud-based data warehousing platforms (strongly preferred)
  • Experience using AI or machine learning to enhance data analysis or deliver scalable, educator-facing insights (strongly preferred)
  • Familiarity with LTI standards, LMS platforms, and education data interoperability; direct experience with Canvas LMS (strongly preferred)
  • Knowledge of data privacy, security, and protection standards, particularly as they relate to PII and educational data (FERPA, COPPA, etc.) (preferred)
  • Design, Refine, and Lead the Data & Insights Product Strategy
  • Oversee Data & Insights Product Development and Delivery
  • Strengthen Data Infrastructure in Partnership with Information Systems
  • Lead the Data & Insights Product Delivery Team

SQL, Data Analysis, ETL, Machine Learning, People Management, Product Management, Snowflake, User Experience Design, Cross-functional Team Leadership, Tableau, Product Development, Data engineering, Communication Skills, Agile methodologies, RESTful APIs, Data visualization, Stakeholder management, Strategic thinking, Data modeling, Data analytics, Data management

Posted about 4 hours ago
Apply

📍 United States

🧭 Full-Time

🔍 Information Security

  • 5+ years of experience in security engineering, with a primary focus on SIEM platforms.
  • Hands-on experience with at least two of the following SIEM platforms: Splunk, Microsoft Sentinel, Elastic, Google SecOps, CrowdStrike NG-SIEM, LogScale
  • 2+ years of experience with Cribl or similar observability pipeline tools (e.g., Logstash, Fluentd, Kafka).
  • Strong knowledge of log formats, data normalization, and event correlation.
  • Familiarity with detection engineering, threat modeling, and MITRE ATT&CK framework.
  • Proficiency with scripting (e.g., Python, PowerShell, Bash) and regular expressions.
  • Deep understanding of logging from cloud (AWS, Azure, GCP) and on-prem environments.
  • Architect, implement, and maintain SIEM solutions with a focus on modern platforms
  • Design and manage log ingestion pipelines using tools such as Cribl Stream, Edge, or Search (or similar).
  • Optimize data routing, enrichment, and filtering to improve SIEM efficiency and cost control.
  • Collaborate with cybersecurity, DevOps, and cloud infrastructure teams to integrate log sources and telemetry data.
  • Develop custom parsers, dashboards, correlation rules, and alerting logic for security analytics and threat detection.
  • Maintain and enhance system reliability, scalability, and performance of logging infrastructure.
  • Provide expertise and guidance on log normalization, storage strategy, and data retention policies.
  • Lead incident response investigations and assist with root cause analysis leveraging SIEM insights.
  • Mentor junior engineers and contribute to strategic security monitoring initiatives.
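
Custom parsers and log normalization come up repeatedly above; a minimal Python sketch that maps a syslog-style sshd line onto flat, ECS-like field names (the regex and field names are illustrative assumptions) could look like:

```python
# Minimal sketch: parse a failed-SSH-login log line into normalized event fields.
# The regex and the output field names are illustrative, not a vendor schema.
import re

AUTH_FAIL = re.compile(
    r"^(?P<ts>\w{3}\s+\d+\s[\d:]{8}) (?P<host>\S+) sshd\[\d+\]: "
    r"Failed password for (invalid user )?(?P<user>\S+) from (?P<src_ip>[\d.]+)"
)

def normalize(line: str):
    """Map a raw log line to a flat event dict, or None if it doesn't match."""
    m = AUTH_FAIL.search(line)
    if not m:
        return None
    return {
        "event.category": "authentication",
        "event.outcome": "failure",
        "source.ip": m.group("src_ip"),
        "user.name": m.group("user"),
        "host.name": m.group("host"),
        "@timestamp": m.group("ts"),
    }

sample = "Jun 12 08:15:42 web-01 sshd[3121]: Failed password for invalid user admin from 203.0.113.7 port 52214 ssh2"
print(normalize(sample))
```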

AWS, Python, Bash, Cloud Computing, GCP, Kafka, Kubernetes, API testing, Azure, Data engineering, CI/CD, RESTful APIs, Linux, DevOps, JSON, Ansible, Scripting

Posted about 5 hours ago
Apply

📍 United States

🧭 Full-Time

💸 185,500 - 293,750 USD per year

🔍 Software Development

  • Strong technical expertise in designing and building scalable ML infrastructure.
  • Experience with distributed systems and cloud-based ML platforms.
  • Proficiency in programming languages such as Python, Java, or Scala.
  • Deep understanding of ML workflows, including data pipelines, model training, and deployment.
  • Passion for innovation and eagerness to implement the latest advancements in ML infrastructure.
  • Strong problem-solving skills and ability to optimize complex systems for performance and reliability.
  • Collaborative mindset with excellent communication skills to work across teams.
  • Ability to thrive in a fast-paced, dynamic environment with evolving technical challenges.
  • Design, implement, and optimize distributed systems and infrastructure components to support large-scale machine learning workflows, including data ingestion, feature engineering, model training, and serving.
  • Develop and maintain frameworks, libraries, and tools that streamline the end-to-end machine learning lifecycle, from data preparation and experimentation to model deployment and monitoring.
  • Architect and implement highly available, fault-tolerant, and secure systems that meet the performance and scalability requirements of production machine learning workloads.
  • Collaborate with machine learning researchers and data scientists to understand their requirements and translate them into scalable and efficient software solutions.
  • Stay current with advancements in machine learning infrastructure, distributed computing, and cloud technologies, integrating them into our platform to drive innovation.
  • Mentor junior engineers, conduct code reviews, and uphold engineering best practices to ensure the delivery of high-quality software solutions.
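
The responsibilities describe the full ML lifecycle (ingestion, feature engineering, training, serving); as a heavily reduced sketch under assumed data and paths, one train-and-persist step might look like this with scikit-learn and joblib:

```python
# Reduced sketch of one lifecycle step: train a model and persist the artifact
# that a serving layer would later load. Paths and columns are assumptions.
import joblib
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

def train_and_save(features_path: str, model_path: str) -> float:
    df = pd.read_parquet(features_path)              # previously ingested feature table
    X, y = df.drop(columns=["label"]), df["label"]
    X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.2, random_state=0)

    model = GradientBoostingClassifier().fit(X_train, y_train)
    auc = roc_auc_score(y_val, model.predict_proba(X_val)[:, 1])

    joblib.dump(model, model_path)                   # artifact consumed by the serving layer
    return auc

if __name__ == "__main__":
    print("validation AUC:", train_and_save("features.parquet", "model.joblib"))
```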

AWS, Docker, Python, Cloud Computing, Kubernetes, Machine Learning, Algorithms, Data engineering, Data science, CI/CD, RESTful APIs, Scala, Software Engineering

Posted about 6 hours ago
Apply

🔥 Data Analyst - Finance

📍 USA

🧭 Full-Time

🔍 Fintech

🏢 Company: Comun

  • Expert-level SQL knowledge with demonstrated ability to optimize complex queries (non-negotiable)
  • 3+ years of practical experience in data engineering or analytics roles with financial data
  • 3+ years of experience in a similar role at a fintech or financial services company in the payments or lending space
  • Solid understanding of finance concepts and principles
  • Proven track record building data pipelines and ETL processes
  • Experience implementing cost modeling and optimization analytics
  • Problem-solving mindset with strong analytical skills
  • Excellent communication skills to explain complex technical and financial concepts
  • Design and implement scalable data pipelines to establish and maintain a solid fund flow process
  • Automate financial reconciliation processes and generate actionable reports
  • Develop and maintain revenue and cost models to identify growth opportunities and provide insights for strategic decision-making
  • Build analytical tools to identify and quantify cost optimization opportunities across the organization
  • Monitor vendor performance metrics and evaluate new vendor opportunities
  • Implement data solutions to detect financial anomalies and uncover efficiency opportunities that drive business value
  • Perform cohort-level performance analysis to develop a deeper understanding of customer unit economics
  • Collaborate with finance, data, growth, product, and engineering teams to develop robust financial data architecture
  • Contribute to our mission of financial inclusion by enabling data-informed product and pricing decisions
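
Automated reconciliation is the most concrete responsibility above; a small pandas sketch that flags ledger entries missing from a processor settlement file (file names, column names, and the txn_id matching key are assumptions) might look like:

```python
# Sketch: reconcile an internal ledger against a payment-processor settlement file.
# File names, column names, and the matching key (txn_id) are assumptions.
import pandas as pd

def reconcile(ledger_csv: str, settlement_csv: str) -> pd.DataFrame:
    ledger = pd.read_csv(ledger_csv)          # expects columns: txn_id, amount_usd
    settlement = pd.read_csv(settlement_csv)  # expects columns: txn_id, amount_usd

    merged = ledger.merge(
        settlement, on="txn_id", how="outer",
        suffixes=("_ledger", "_processor"), indicator=True,
    )
    merged["status"] = merged["_merge"].astype(str).map({
        "left_only": "missing_at_processor",
        "right_only": "missing_in_ledger",
        "both": "matched",
    })
    both = merged["status"] == "matched"
    mismatch = both & (merged["amount_usd_ledger"] != merged["amount_usd_processor"])
    merged.loc[mismatch, "status"] = "amount_mismatch"
    return merged[merged["status"] != "matched"]  # only the exceptions need review

if __name__ == "__main__":
    print(reconcile("ledger.csv", "settlement.csv").to_string(index=False))
```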

AWS, PostgreSQL, Python, SQL, Data Analysis, ETL, Snowflake, Data engineering, FastAPI, Financial analysis, Data modeling, Finance

Posted about 7 hours ago
Apply

🔥 Data Engineer (Contract)

📍 LatAm

🧭 Contract

🏢 Company: Able · Rental · Property Management · Real Estate

  • 10+ years of data engineering experience with enterprise-scale systems
  • Expertise in Apache Spark and Delta Lake, including ACID transactions, time travel, Z-ordering, and compaction
  • Deep knowledge of Databricks (Jobs, Clusters, Workspaces, Delta Live Tables, Unity Catalog)
  • Experience building scalable ETL/ELT pipelines using tools like Airflow, Glue, Dataflow, or ADF
  • Advanced SQL for data modeling and transformation
  • Strong programming skills in Python (or Scala)
  • Hands-on experience with data formats such as Parquet, Avro, and JSON
  • Familiarity with schema evolution, versioning, and backfilling strategies
  • Working knowledge of at least one major cloud platform: AWS (S3, Athena, Redshift, Glue Catalog, Step Functions), GCP (BigQuery, Cloud Storage, Dataflow, Pub/Sub), or Azure (Synapse, Data Factory, Azure Databricks)
  • Experience designing data architectures with real-time or streaming data (Kafka, Kinesis)
  • Consulting or client-facing experience with strong communication and leadership skills
  • Experience with data mesh architectures and domain-driven data design
  • Knowledge of metadata management, data cataloging, and lineage tracking tools
  • Shape large-scale data architecture vision and roadmap across client engagements
  • Establish governance, security frameworks, and regulatory compliance standards
  • Lead strategy around platform selection, integration, and scaling
  • Guide organizations in adopting data lakehouse and federated data models
  • Lead technical discovery sessions to understand client needs
  • Translate complex architectures into clear, actionable value for stakeholders
  • Build trusted advisor relationships and guide strategic decisions
  • Align architecture recommendations with business growth and goals
  • Design and implement modern data lakehouse architectures with Delta Lake and Databricks
  • Build and manage ETL/ELT pipelines at scale using Spark (PySpark preferred)
  • Leverage Delta Live Tables, Unity Catalog, and schema evolution features
  • Optimize storage and queries on cloud object storage (e.g., AWS S3, Azure Data Lake)
  • Integrate with cloud-native services like AWS Glue, GCP Dataflow, and Azure Synapse Analytics
  • Implement data quality monitoring, lineage tracking, and schema versioning
  • Build scalable pipelines with tools like Apache Airflow, Step Functions, and Cloud Composer
  • Develop cost-optimized, scalable, and compliant data solutions
  • Design POCs and pilots to validate technical approaches
  • Translate business requirements into production-ready data systems
  • Define and track success metrics for platform and pipeline initiatives
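
Given the emphasis above on Delta Lake ACID merges and schema management, here is a compact PySpark sketch of an idempotent upsert into a Delta table; the storage paths, key column, and session config are placeholders.

```python
# Compact sketch: idempotent upsert of newly landed records into a Delta table.
# Paths, the key column (record_id), and the session config are placeholders.
from delta.tables import DeltaTable
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder.appName("delta_upsert")
    .config("spark.sql.extensions", "io.delta.sql.DeltaSparkSessionExtension")
    .config("spark.sql.catalog.spark_catalog",
            "org.apache.spark.sql.delta.catalog.DeltaCatalog")
    .getOrCreate()
)

updates = spark.read.parquet("s3://bucket/landing/records/")        # new batch
target = DeltaTable.forPath(spark, "s3://bucket/silver/records/")   # existing Delta table

(
    target.alias("t")
    .merge(updates.alias("s"), "t.record_id = s.record_id")
    .whenMatchedUpdateAll()
    .whenNotMatchedInsertAll()
    .execute()
)
```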

AWS, Python, SQL, Cloud Computing, ETL, GCP, Kafka, Airflow, Azure, Data engineering, Scala, Data modeling

Posted about 8 hours ago
Apply
Showing 10 of 803 jobs

Ready to Start Your Remote Journey?

Apply to 5 jobs per day for free, or get unlimited applications with a subscription starting at €5/week.

Why Remote JavaScript Jobs Are Becoming More Popular

Remote work from home is increasingly in demand among computer and IT professionals for several reasons:

  • Flexibility in time and location.
  • Collaboration with international companies.
  • Higher salary levels.
  • No ties to a physical office.

Remote work opens up new opportunities for specialists, allowing them to go beyond geographical limits and build a successful remote IT career. This employment model is transforming traditional work approaches, making it more convenient, efficient, and accessible for professionals worldwide.

Why do Job Seekers Choose Remoote.app?

Our platform offers convenient conditions for finding remote IT jobs from home:

  • localized search — filter job listings based on your country of residence;
  • AI-powered job processing — artificial intelligence analyzes thousands of listings, highlighting key details so you don’t have to read long descriptions;
  • advanced filters — sort vacancies by skills, experience, qualification level, and work model;
  • regular database updates — we monitor job relevance and remove outdated listings;
  • personalized notifications — get tailored job offers directly via email or Telegram;
  • resume builder — create a professional CV with ease using our customizable templates and AI-powered suggestions;
  • data security — modern encryption technologies ensure the protection of your personal information.

Join our platform and find your dream job today! We offer flexible pricing — up to 5 applications per day for free, with weekly, monthly, and yearly subscription plans for extended access.