Remote Working

Remote working from home provides convenience and freedom, a lifestyle embraced by millions of people around the world. With our platform, finding the right job, whether full-time or part-time, becomes quick and easy thanks to AI, precise filters, and daily updates. Sign up now and start your online career today – fast and easy!

Remote IT Jobs
Apache Airflow
138 jobs found.

Set alerts to receive daily emails with new job openings that match your preferences.

Apply

πŸ“ Canada

πŸ’Έ 98400.0 - 137800.0 CAD per year

πŸ” Software Development

  • A degree in Computer Science or Engineering and 5-8 years of experience developing and maintaining software (or an equivalent combination of education and work experience), plus a track record of substantial contributions to software projects with high business impact
  • Experience writing clean code that performs well at scale, ideally with languages like Python, Scala, Java, Go, and shell scripting
  • Passionate interest in data engineering and infrastructure: ingestion, storage, and compute in relational, NoSQL, and serverless architectures
  • Experience with various types of data stores, query engines and frameworks, e.g. PostgreSQL, MySQL, S3, Redshift/Spectrum, Presto/Athena, Spark
  • Experience working with message queues such as Kafka and Kinesis
  • Experience developing data pipelines and integrations for high-volume, high-velocity, and highly varied data (a minimal Airflow sketch follows this list)
  • Experience with data warehousing and data modeling best practices
  • Work within a cross-functional team (including analysts, product managers, and other developers) to deliver data products and services to our internal stakeholders
  • Conduct directed research and technical analysis of new candidate technologies that fill a development team’s business or technical need
  • Provide technical advice, act as a role model for your teammates, flawlessly execute complicated plans, and navigate many levels of the organization
  • Contribute enhancements to development, build, deployment, and monitoring processes with an emphasis on security, reliability and performance
  • Implement our technical roadmap as we scale our services and build new data products
  • Participate in code reviews, attend regular team meetings, and apply software development best practices
  • Take ownership of your work, and work autonomously when necessary
  • Recognize opportunities to improve efficiency in our data systems and processes, increase data quality, and enable consistent and reliable results
  • Participate in the design and implementation of our next generation data platform to empower Hootsuite with data
  • Participate in the development of the technical hiring process and interview scripts with an aim of attracting and hiring the best developers
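
As a rough illustration of the pipeline work described above, a minimal Apache Airflow DAG (2.x TaskFlow API) might look like the following sketch. The DAG id, task bodies, and inline data are hypothetical placeholders; a real pipeline would pull from systems such as Kafka or S3 and load into a warehouse via a hook.

    # Minimal Airflow DAG sketch: a daily extract -> transform -> load flow.
    # All identifiers below are hypothetical placeholders.
    from datetime import datetime, timedelta

    from airflow.decorators import dag, task

    @dag(
        schedule="@daily",                       # one run per day (Airflow 2.4+)
        start_date=datetime(2024, 1, 1),
        catchup=False,                           # skip historical backfill
        default_args={"retries": 2, "retry_delay": timedelta(minutes=5)},
    )
    def example_events_pipeline():
        @task
        def extract() -> list[dict]:
            # A real task would read from Kafka, S3, or an API.
            return [{"user_id": 1, "event": "click"}]

        @task
        def transform(rows: list[dict]) -> list[dict]:
            # Keep only well-formed rows; real logic would be far richer.
            return [r for r in rows if "user_id" in r and "event" in r]

        @task
        def load(rows: list[dict]) -> None:
            # A real task would COPY into Redshift/PostgreSQL via a hook.
            print(f"loading {len(rows)} rows")

        load(transform(extract()))

    example_events_pipeline()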

AWS, PostgreSQL, Python, Software Development, SQL, Agile, Apache Airflow, Cloud Computing, Data Analysis, Data Mining, ETL, Java, Kafka, MySQL, Software Architecture, Algorithms, API testing, Data engineering, Data Structures, Go, Serverless, Spark, CI/CD, RESTful APIs, Microservices, Scala, Data visualization, Data modeling, Data management

Posted 13 minutes ago
Apply

πŸ“ United States of America

🧭 Full-Time

πŸ” Biotechnology, Pharmaceutical, Healthcare

🏒 Company: clarivate_careers

  • PhD (or equivalent work experience) in Bioinformatics, Computational or Systems Biology, Statistics, Machine Learning, Computer Science (the latter two as applied to problems in biology or medicine), or a related field.
  • A minimum of five years of relevant work experience in bioinformatics or computational biology in a professional setting, preferably in biotechnology, pharmaceuticals, or healthcare.
  • Experience working in a customer-oriented consulting environment: an ability to earn customer trust through effective communication, efficiency, integrity, and deep technical expertise.
  • Proficiency in statistical programming (primarily R) and the Unix/Linux shell, with a strong understanding of statistics and biological data analysis.
  • Conduct data analysis and synthesis to support project objectives using established methods and advanced analytical tools to generate actionable insights.
  • Assist in designing and executing research methodologies to address business challenges and provide recommendations.
  • Collaborate with consulting teams in problem-solving and the development of innovative solutions tailored to customer needs and strategic goals.
  • Contribute to the development of client presentations and reports by organizing data, visualizing findings, and ensuring accuracy, relevance and clarity in all communications.
  • Engage in customer interactions and meetings, with oversight, to understand customer needs, gather feedback, and ensure alignment with project objectives.
  • Support proposal development by contributing to specific elements of the proposal.
  • Collaborate with senior consultants on project execution, coordinating tasks, managing timelines, and ensuring the successful delivery of high-quality consulting services.

AWS, Docker, Python, SQL, Apache Airflow, Apache Hadoop, Cloud Computing, Data Analysis, Data Mining, ETL, Flask, Image Processing, Kafka, Kubernetes, Machine Learning, Numpy, PyTorch, Algorithms, Data engineering, Data science, Data Structures, RDBMS, Nosql, Pandas, Tensorflow, CI/CD, RESTful APIs, Terraform, JSON, Data visualization, Ansible, Data modeling, Data analytics, Data management

Posted about 5 hours ago
Apply
🔥 Senior Data Analyst

πŸ“ Ireland, United States (AZ, CA, CO, CT, FL, GA, IL, IN, IA, KS, MA, MI, MN, MO, NE, NJ, NY, NC, PA, SD, TN, TX, UT, WA, WI)

πŸ” Data Analysis

🏢 Company: Sojern | 👥 501-1000 | 💰 $9,842,000 over 1 year ago | 🫂 Last layoff over 1 year ago | Digital Marketing, Advertising Platforms, SaaS, Travel

  • 3+ years of experience in a data-driven role, preferably in analytics, data engineering, or business intelligence.
  • Strong proficiency in SQL (writing complex queries, working with relational databases).
  • Exposure to building ETL pipelines
  • Familiarity with BI tools (Tableau, Power BI, Looker, etc.) and the ability to create meaningful data visualizations.
  • Ability to work in a fast-paced environment, manage multiple priorities, and communicate insights effectively to both technical and non-technical stakeholders.
  • Work closely with Sales, Finance, Product, and Marketing teams to understand business needs and translate them into data solutions.
  • Analyze large datasets to identify trends, performance metrics, and actionable insights that support strategic business initiatives.
  • Develop detailed reports and dashboards using Tableau to help stakeholders make data-driven decisions.
  • Collaborate with data scientists and operations analysts to support testing and analysis on campaigns.
  • Build, optimize, and maintain ETL pipelines to process and transform raw data into structured, meaningful datasets.
  • Work with SQL, Python, and BigQuery to extract, clean, and manipulate data from multiple sources (see the BigQuery sketch after this list).
  • Ensure data quality, integrity, and consistency across different data sources.
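
As a rough illustration of the SQL/BigQuery work above, a pull of campaign metrics into pandas might look like this sketch; the project, dataset, table, and column names are invented placeholders.

    # Hypothetical pull of campaign metrics from BigQuery into pandas.
    from google.cloud import bigquery

    client = bigquery.Client(project="my-analytics-project")

    query = """
        SELECT campaign_id,
               DATE(event_ts)   AS day,
               COUNT(*)         AS impressions,
               COUNTIF(clicked) AS clicks
        FROM `my-analytics-project.marketing.events`
        WHERE event_ts >= TIMESTAMP_SUB(CURRENT_TIMESTAMP(), INTERVAL 30 DAY)
        GROUP BY campaign_id, day
    """

    df = client.query(query).to_dataframe()   # needs pyarrow/db-dtypes installed
    df["ctr"] = df["clicks"] / df["impressions"]
    print(df.sort_values("ctr", ascending=False).head())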

Python, SQL, Apache Airflow, Business Intelligence, Data Analysis, ETL, Jenkins, Tableau, Algorithms, Data engineering, Data Structures, REST API, Reporting, Data entry, Data visualization, Data modeling, Data analytics, Data management

Posted about 21 hours ago
Apply

πŸ“ United States

🧭 Full-Time

πŸ’Έ 232000.0 - 310000.0 USD per year

πŸ” Software Development

  • 10+ years of experience designing, developing and launching backend systems at scale using languages like Python or Kotlin.
  • Strong experience leading multiple engineering teams to deliver high-quality software
  • Track record of successfully leading engineering teams at both rapidly scaling startups and complex larger technology companies.
  • Expertise in synthesizing complex technical requirements, designs, trade-offs, and capabilities into clear decisions to influence ML & engineering direction
  • Extensive experience developing highly available distributed systems using technologies like AWS, MySQL, Spark and Kubernetes.
  • Experience building and operating online, real-time ML infrastructure, including a model server and a feature store (a toy serving sketch follows this list)
  • Experience developing an offline environment for large scale data analysis and model training using technologies including Spark, Kubeflow, Ray, and Airflow
  • Experience delivering major features and system components
  • Set the multi-year, multi-team technical strategy for ML Platform and deliver it through direct implementation or broad technical leadership
  • Partner with technical leaders across the company to create joint roadmaps that will achieve business impacting goals through the advancement of machine learning
  • Act as a force-multiplier for your teams through your definition and advocacy of technical solutions and operational processes
  • You have an ownership mindset, and you will proactively champion investments in availability so that every project in your area achieves its availability targets
  • You will foster a culture of quality and ownership on your team by setting system design standards for your team, and advocating for them beyond your team through your writing and tech talks
  • You will help develop talent on your team by providing feedback and guidance, and leading by example
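
A toy sketch of the real-time serving pattern mentioned above, with a dict standing in for the feature store and a hand-written linear scorer standing in for a trained model. Every name here is hypothetical; production systems would use a dedicated feature store and a serialized model artifact loaded at startup.

    # Tiny real-time model server sketch: look up features, score, respond.
    from fastapi import FastAPI

    app = FastAPI()

    # Stand-in "feature store": entity id -> feature vector.
    FEATURES: dict[int, list[float]] = {42: [0.1, 3.5, 7.0]}

    def predict(features: list[float]) -> float:
        # Placeholder linear model; a real server would load e.g. a
        # serialized XGBoost or PyTorch model instead.
        weights = [0.5, -0.2, 0.1]
        return sum(w * x for w, x in zip(weights, features))

    @app.get("/score/{entity_id}")
    def score(entity_id: int) -> dict:
        features = FEATURES.get(entity_id)
        if features is None:
            return {"entity_id": entity_id, "error": "unknown entity"}
        return {"entity_id": entity_id, "score": predict(features)}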

AWS, Backend Development, Leadership, Project Management, Python, Apache Airflow, Data Analysis, Kotlin, Kubeflow, Kubernetes, Machine Learning, MySQL, Software Architecture, Cross-functional Team Leadership, Data engineering, Spark, Communication Skills, RESTful APIs, DevOps

Posted 1 day ago
Apply

πŸ“ Argentina, Colombia, Peru, Bolivia, Plurinational State of, Mexico

🧭 Contract

πŸ’Έ 2300.0 - 2500.0 USD per month

πŸ” Software Development

🏒 Company: Workana

  • Experience with Selenium, Puppeteer or Playwright.
  • Experience with Java (Spring Boot, Jsoup, HttpClient) and Python (Scrapy, Selenium, Playwright, FastAPI).
  • Experience with pandas and NumPy.
  • Knowledge of SQL databases (PostgreSQL, MySQL, SQL Server).
  • Implementation of scrapers in scalable environments with Docker and Kubernetes.
  • Deployment in AWS or GCP.
  • Development and maintenance of scrapers for extracting data from web portals (see the Playwright sketch after this list).
  • Refactoring and optimization of legacy Java scrapers, migrating them to Python.
  • Implementation of more efficient architectures to improve performance.
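
As an illustration of the scraping stack above, a minimal Playwright (sync API) sketch might look like this; the URL and CSS selector are placeholders, not a real portal.

    # Minimal scraper sketch using Playwright's sync API.
    from playwright.sync_api import sync_playwright

    with sync_playwright() as p:
        browser = p.chromium.launch(headless=True)
        page = browser.new_page()
        page.goto("https://example.com/listings")   # placeholder target
        # Collect the text of every listing title on the page.
        titles = page.locator(".listing-title").all_text_contents()
        browser.close()

    for title in titles:
        print(title)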

AWS, Backend Development, Docker, GraphQL, PostgreSQL, Python, SQL, Apache Airflow, ETL, GCP, Java, Java EE, Kubernetes, MySQL, Numpy, Spring Boot, Data engineering, FastAPI, REST API, Pandas, Selenium, CI/CD, Microservices, JSON, Data management

Posted 1 day ago
Apply

πŸ“ Lithuania

πŸ’Έ 4000.0 - 6000.0 EUR per month

πŸ” Software Development

🏒 Company: Softeta

  • 4+ years of experience as a Data Engineer
  • Experience with Azure (Certifications are a Plus)
  • Experience with Databricks, Azure Data Lake, Data Factory and Apache Airflow
  • Experience with CI/CD or infrastructure as code
  • Knowledge of Medallion (multi-hop) architecture (a bronze-to-silver sketch follows this list)
  • Experience developing and administering ETL processes in the Cloud (Azure, AWS or GCP) environment
  • Strong programming skills in Python and SQL
  • Strong problem-solving and analytical skills
  • Design, develop, and maintain data pipelines and ETL processes
  • Data modeling, data cleansing
  • Automating data processing workflows using tools such as Airflow or other workflow management tools
  • Optimizing the performance of databases, including designing and implementing data structures and using indexes appropriately
  • Implement data quality and data governance processes
  • Being a data advocate and helping unlock business value by using data
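
One hop of a Medallion (bronze-to-silver) flow might look like the following PySpark sketch; the table and column names are hypothetical, and the Delta write assumes a Databricks-style runtime with Delta Lake available.

    # One medallion hop (bronze -> silver) in PySpark; names are invented.
    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.getOrCreate()   # provided for you on Databricks

    bronze = spark.read.table("bronze.raw_orders")           # raw, as-ingested

    silver = (
        bronze
        .dropDuplicates(["order_id"])                        # basic cleansing
        .filter(F.col("amount") > 0)                         # drop bad rows
        .withColumn("order_date", F.to_date("order_ts"))
    )

    silver.write.format("delta").mode("overwrite").saveAsTable("silver.orders")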

Python, SQL, Apache Airflow, ETL, Azure, Data engineering, CI/CD, Data modeling

Posted 2 days ago
Apply

πŸ“ United States

πŸ’Έ 144000.0 - 180000.0 USD per year

πŸ” Software Development

🏒 Company: HungryrootπŸ‘₯ 101-250πŸ’° $40,000,000 Series C almost 4 years agoArtificial Intelligence (AI)Food and BeverageE-CommerceRetailConsumer GoodsSoftware

  • 5+ years of experience in ETL development and data modeling
  • 5+ years of experience in both Scala and Python
  • 5+ years of experience in Spark
  • Excellent problem-solving skills and the ability to translate business problems into practical solutions
  • 2+ years of experience working with the Databricks Platform
  • Develop pipelines in Spark (Python + Scala) on the Databricks Platform (see the aggregation sketch after this list)
  • Build cross-functional working relationships with business partners in Food Analytics, Operations, Marketing, and Web/App Development teams to power pipeline development for the business
  • Ensure system reliability and performance
  • Deploy and maintain data pipelines in production
  • Set an example of code quality, data quality, and best practices
  • Work with Analysts and Data Engineers to enable high quality self-service analytics for all of Hungryroot
  • Investigate datasets to answer business questions, ensuring data quality and business assumptions are understood before deploying a pipeline
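
As a rough sketch of the kind of Spark pipeline described above, here is a small PySpark aggregation job; the tables and columns are invented for illustration.

    # Small PySpark aggregation job: order events -> weekly metrics.
    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.getOrCreate()

    orders = spark.read.table("analytics.orders")            # hypothetical table

    weekly = (
        orders
        .withColumn("week", F.date_trunc("week", F.col("ordered_at")))
        .groupBy("week")
        .agg(
            F.countDistinct("customer_id").alias("customers"),
            F.sum("total").alias("revenue"),
        )
    )

    weekly.write.mode("overwrite").saveAsTable("analytics.weekly_orders")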

AWS, Python, SQL, Apache Airflow, Data Mining, ETL, Snowflake, Algorithms, Amazon Web Services, Data engineering, Data Structures, Spark, CI/CD, RESTful APIs, Microservices, JSON, Scala, Data visualization, Data modeling, Data analytics, Data management

Posted 2 days ago
Apply

πŸ“ India

πŸ” Software Development

🏒 Company: Apollo.ioπŸ‘₯ 501-1000πŸ’° $100,000,000 Series D over 1 year agoSoftware Development

  • 8+ years of experience building Machine Learning or AI systems
  • Experience deploying and managing machine learning models in the cloud
  • Experience with fine-tuning LLMs and prompt engineering
  • Strong analytical and problem-solving skills
  • Proven software engineering skills in production environments, primarily using Python
  • Experience with Machine Learning software tools and libraries (e.g., Scikit-learn, TensorFlow, Keras, PyTorch, etc.)
  • Design, build, evaluate, deploy and iterate on scalable Machine Learning systems
  • Understand the Machine Learning stack at Apollo and continuously improve it
  • Build systems that help Apollo personalize their users’ experience
  • Evaluate the performance of machine learning systems against business objectives (a minimal evaluation sketch follows this list)
  • Develop and maintain scalable data pipelines that power our algorithms
  • Implement automated monitoring, alerting, self-healing (restartable/graceful failures) features while productionizing data & ML workflows
  • Write unit/integration tests and contribute to engineering wiki
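
A minimal evaluation loop of the sort implied above might look like this scikit-learn sketch; the data is synthetic, and the choice of metric and model is purely illustrative.

    # Train a classifier and evaluate it against a held-out set.
    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.metrics import roc_auc_score
    from sklearn.model_selection import train_test_split

    X, y = make_classification(n_samples=1_000, n_features=20, random_state=0)
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.2, random_state=0
    )

    model = RandomForestClassifier(n_estimators=200, random_state=0)
    model.fit(X_train, y_train)

    auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
    print(f"holdout ROC AUC: {auc:.3f}")  # compare against the business target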

Python, SQL, Apache Airflow, Cloud Computing, Data Analysis, Keras, Machine Learning, MLFlow, Numpy, PyTorch, Algorithms, REST API, Tensorflow, Software Engineering

Posted 2 days ago
Apply

πŸ“ United States

πŸ” Software Development

🏒 Company: ge_externalsite

  • Exposure to industry standard data modeling tools (e.g., ERWin, ER Studio, etc.).
  • Exposure to Extract, Transform & Load (ETL) tools like Informatica or Talend
  • Exposure to industry standard data catalog, automated data discovery, and data lineage tools (e.g., Alation, Collibra, TAMR)
  • Hands-on experience in programming languages like Java, Python or Scala
  • Hands-on experience in writing SQL scripts for Oracle, MySQL, PostgreSQL or HiveQL
  • Experience with Big Data / Hadoop / Spark / Hive / NoSQL database engines (e.g., Cassandra or HBase)
  • Exposure to unstructured datasets and ability to handle XML, JSON file formats
  • Work independently as well as with a team to develop and support Ingestion jobs
  • Evaluate and understand various data sources (databases, APIs, flat files, etc.) to determine optimal ingestion strategies
  • Develop a comprehensive data ingestion architecture, including data pipelines, data transformation logic, and data quality checks, considering scalability and performance requirements.
  • Choose appropriate data ingestion tools and frameworks based on data volume, velocity, and complexity
  • Design and build data pipelines to extract, transform, and load data from source systems to target destinations, ensuring data integrity and consistency
  • Implement data quality checks and validation mechanisms throughout the ingestion process to identify and address data issues
  • Monitor and optimize data ingestion pipelines to ensure efficient data processing and timely delivery
  • Set up monitoring systems to track data ingestion performance, identify potential bottlenecks, and trigger alerts for issues
  • Work closely with data engineers, data analysts, and business stakeholders to understand data requirements and align ingestion strategies with business objectives.
  • Build technical data dictionaries and support business glossaries to analyze the datasets
  • Perform data profiling and data analysis for source systems, manually maintained data, machine generated data and target data repositories
  • Build both logical and physical data models for both Online Transaction Processing (OLTP) and Online Analytical Processing (OLAP) solutions
  • Develop and maintain data mapping specifications based on the results of data analysis and functional requirements
  • Perform a variety of data loads & data transformations using multiple tools and technologies.
  • Build automated Extract, Transform & Load (ETL) jobs based on data mapping specifications
  • Maintain metadata structures needed for building reusable Extract, Transform & Load (ETL) components.
  • Analyze reference datasets and familiarize yourself with Master Data Management (MDM) tools.
  • Analyze the impact of downstream systems and products
  • Derive solutions and make recommendations from deep dive data analysis.
  • Design and build the Data Quality (DQ) rules needed (a minimal example follows this list)
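
A few of the DQ rules mentioned above might be expressed as simple checks like this pandas sketch; the rules and column names are invented for illustration, and real pipelines would route violations to alerting or quarantine rather than printing them.

    # Simple data-quality rules applied to an ingestion batch.
    import pandas as pd

    def run_dq_checks(df: pd.DataFrame) -> list[str]:
        """Return a list of human-readable rule violations."""
        failures = []
        if df["customer_id"].isna().any():
            failures.append("customer_id contains nulls")
        if df["customer_id"].duplicated().any():
            failures.append("customer_id is not unique")
        if (df["amount"] < 0).any():
            failures.append("amount contains negative values")
        return failures

    batch = pd.DataFrame(
        {"customer_id": [1, 2, 2], "amount": [10.0, -5.0, 7.5]}
    )
    for problem in run_dq_checks(batch):
        print("DQ violation:", problem)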

AWS, PostgreSQL, Python, SQL, Apache Airflow, Apache Hadoop, Data Analysis, Data Mining, Erwin, ETL, Hadoop HDFS, Java, Kafka, MySQL, Oracle, Snowflake, Cassandra, Clickhouse, Data engineering, Data Structures, REST API, Nosql, Spark, JSON, Data visualization, Data modeling, Data analytics, Data management

Posted 4 days ago
Apply

πŸ“ United States

πŸ” Medical device, pharmaceutical, clinical, or biotechnology

🏒 Company: JobgetherπŸ‘₯ 11-50πŸ’° $1,493,585 Seed about 2 years agoInternet

  • Proficiency in SQL and programming with R or Python
  • Experience with Google Cloud Platform (BigQuery, Storage, Compute Engine) is highly valued.
  • Strong problem-solving skills and the ability to work independently or as part of a team.
  • Excellent communication skills, able to convey complex statistical concepts to non-technical stakeholders.
  • Organize and merge data from diverse sources, ensuring data quality and integrity.
  • Identify and resolve bottlenecks in data processing and analysis, implementing solutions like automation and optimization.
  • Collaborate with clinical and technical teams to streamline data collection and entry processes.
  • Perform statistical analysis, data visualization, and generate reports for clinical studies and other projects.
  • Prepare summary statistics, tables, figures, and listings for presentations and publications (see the brief sketch after this list).
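
The merge-and-summarize steps above might look like this brief pandas sketch; the subjects, study arms, and biomarker values are invented purely to show the shape of the computation.

    # Merge two clinical data sources and produce per-arm summary statistics.
    import pandas as pd

    demographics = pd.DataFrame(
        {"subject_id": [1, 2, 3, 4], "arm": ["treatment", "control"] * 2}
    )
    labs = pd.DataFrame(
        {"subject_id": [1, 2, 3, 4], "biomarker": [4.2, 3.1, 5.0, 2.8]}
    )

    merged = demographics.merge(labs, on="subject_id", how="inner")
    summary = merged.groupby("arm")["biomarker"].agg(["count", "mean", "std"])
    print(summary)  # one row per study arm, ready for a results table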

Python, SQL, Apache Airflow, Data Mining, ETL, GCP, Algorithms, Data engineering, Pandas, Spark, Data visualization, Data modeling, Data analytics, Data management

Posted 4 days ago
Apply
Showing 10 of 138

Ready to Start Your Remote Journey?

Apply to 5 jobs per day for free, or get unlimited applications with a subscription starting at €5/week.

Why do Job Seekers Choose Our Platform for Remote Work Opportunities?

We’ve developed a well-thought-out home-job matching service that makes the search process easier and more efficient.

AI-powered Job Processing and Advanced Filters

Our algorithms process thousands of job postings daily, extracting only the key information from each listing. This allows you to skip lengthy texts and focus only on the offers that match your requirements.

With powerful skill filters, you can specify your core competencies to instantly receive a selection of job opportunities that align with your experience. 

Search by Country of Residence

For those looking for fully remote jobs in their own country, our platform offers the ability to customize the search based on your location. This is especially useful if you want to adhere to local laws, consider time zones, or work with employers familiar with local specifics.

If necessary, you can also work remotely with employers from other countries without being limited by geographical boundaries.

Regular Data Update

Our platform features over 40,000 remote full-time and part-time job offers from 7,000 companies. This wide range ensures you can find offers that suit your preferences, whether from startups or large corporations.

We regularly verify the validity of vacancy listings and automatically remove outdated or filled positions, ensuring that you only see active and relevant opportunities.

Job Alerts

Once you register, you can set up convenient notification methods, such as receiving tailored job listings directly to your email or via Telegram. This ensures you never miss out on a great opportunity.

Our job board allows you to apply for up to 5 vacancies per day absolutely for free. If you wish to apply for more, you can choose a suitable subscription plan with weekly, monthly, or annual payments.

Wide Range of Completely Remote Online Jobs

On our platform, you'll find fully remote work positions in the following fields:

  • IT and Programming – software development, website creation, mobile app development, system administration, testing, and support.
  • Design and Creative – graphic design, UX/UI design, video content creation, animation, 3D modeling, and illustrations.
  • Marketing and Sales – digital marketing, SMM, contextual advertising, SEO, product management, sales, and customer service.
  • Education and Online Tutoring – teaching foreign languages, school and university subjects, exam preparation, training, and coaching.
  • Content – creating written content for websites, blogs, and social media; translation, editing, and proofreading.
  • Administrative Roles (Assistants, Operators) – virtual assistants, work organization support, calendar management, and document workflow assistance.
  • Finance and Accounting – bookkeeping, reporting, financial consulting, and taxes.

Other careers include: online consulting, market research, project management, and technical support.

All Types of Employment

The platform offers online remote jobs with different types of employment:

  • Full-time – the ideal choice for those who value stability and predictability;
  • Part-time – perfect for those looking for a side job from home or seeking a balance between work and personal life;
  • Contract – suited for professionals who want to work on projects for a set period;
  • Temporary – short-term work that can be either full-time or part-time, often offered for seasonal or urgent tasks;
  • Internship – a form of on-the-job training that allows you to gain practical experience in your chosen field.

Whether you're looking for stable full-time employment, the flexibility of freelancing, or a part-time side gig, you'll find plenty of options on Remoote.app.

Remote Working Opportunities for All Expertise Levels

We feature offers for people with all levels of expertise:

  • for beginners – ideal positions for those just starting their journey in working online from home;
  • for intermediate specialists – if you already have experience, you can explore positions requiring specific skills and knowledge in your field;
  • for experts – roles for highly skilled professionals ready to tackle complex tasks.

How to Start Your Online Job Search Through Our Platform?

To begin searching for home job opportunities, follow these three steps:

  1. Register and complete your profile. This process takes minimal time.
  2. Specify your skills, country of residence, and your preferred position.
  3. Receive notifications about new vacancy openings and apply to suitable ones.

If you don't have a resume yet, use our online builder. It will help you create a professional document, highlighting your key skills and achievements. The AI will automatically optimize it to match job requirements, increasing your chances of a successful response. You can update your profile information at any time: modify your skills, add new preferences, or upload an updated resume.