Remote Working

Remote working from home provides convenience and freedom, a lifestyle embraced by millions of people around the world. With our platform, finding the right job, whether full-time or part-time, is quick and easy thanks to AI, precise filters, and daily updates. Sign up now and start your online career today!

Apache Airflow
204 jobs found.

Set alerts to receive daily emails with new job openings that match your preferences.
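
Most of the roles below center on building and orchestrating data pipelines with Apache Airflow. For orientation, here is a minimal sketch of the kind of ETL job these listings describe, written against Airflow 2.x's TaskFlow API; the DAG name, schedule, and sample data are illustrative placeholders, not taken from any listing.

```python
# Minimal Airflow ETL sketch (assumes Airflow 2.4+ for the `schedule` argument).
# Every name and value here is a placeholder for illustration only.
from datetime import datetime

from airflow.decorators import dag, task


@dag(schedule="@daily", start_date=datetime(2024, 1, 1), catchup=False)
def example_etl():
    @task
    def extract() -> list[dict]:
        # Placeholder: pull raw records from a source system.
        return [{"id": 1, "value": 10}, {"id": 2, "value": 20}]

    @task
    def transform(rows: list[dict]) -> list[dict]:
        # Placeholder transformation: double each value.
        return [{**row, "value": row["value"] * 2} for row in rows]

    @task
    def load(rows: list[dict]) -> None:
        # Placeholder: write the transformed records to a warehouse.
        print(f"would load {len(rows)} rows")

    # TaskFlow infers the dependency chain extract -> transform -> load
    # from these function calls.
    load(transform(extract()))


example_etl()
```

The TaskFlow style is used here because it keeps task dependencies implicit in ordinary function calls; the classic PythonOperator style would work just as well.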

Apply

πŸ“ SΓ£o Paulo, Rio Grande do Sul, Rio de Janeiro, Belo Horizonte

🏒 Company: TELUS Digital Brazil

  • Relevant experience in data science/machine learning development
  • Bachelor's and/or Master's degree in Statistics, Mathematics, Physics, Computer Science, or another related subject.
  • Proficiency in Python.
  • Experience with machine learning frameworks (like Keras, TensorFlow, or PyTorch) and libraries (like scikit-learn).
  • Strong understanding of machine learning algorithms, processes, tools and platforms.
  • Ability to work on distributed cloud platforms
  • Work on all aspects of the design, development and delivery of machine learning enabled solutions for our clients;
  • Come up with solutions to loosely defined business problems by leveraging pattern detection over complex datasets across multiple platforms;
  • Establish efficient, automated processes for large-scale modeling;
  • Become part of a highly skilled and geographically distributed team;
  • Collaborate with peers on problem definitions, data acquisition, data exploration, visualization as well as building the tools that support this process;
  • Strong sense of ownership, urgency and drive.

AWS, Backend Development, Docker, Python, Software Development, SQL, Apache Airflow, Cloud Computing, Data Analysis, Data Mining, Flask, Kubernetes, Machine Learning, MLFlow, Numpy, Algorithms, Data engineering, Data science, REST API, Pandas, CI/CD, Agile methodologies, RESTful APIs, DevOps, Microservices, JSON, Data visualization, Data modeling

Posted about 2 hours ago
Apply

πŸ“ United States of America

πŸ’Έ 330000.0 - 400000.0 USD per year

πŸ” Cybersecurity

🏒 Company: crowdstrikecareers

  • Proven experience leading data platform teams in a fast-paced, petabyte-scale environment.
  • Proven experience creating services with high availability, data durability, and performance.
  • Experience building machine learning platforms, real-time analytics, and graph systems.
  • Strong technical skills in data architecture, ETL processes, database management, and cloud technologies.
  • Excellent communication skills with the ability to collaborate effectively with stakeholders at all levels.
  • Strong leadership abilities with a track record of building high-performing teams.
  • Craft a comprehensive data strategy roadmap.
  • Oversee the design and management of a scalable data infrastructure.
  • Drive and uphold a culture of continuous improvement and operational excellence.
  • Implement best practices for data governance, security, and compliance.
  • Collaborate with cross-functional teams to identify business requirements and ensure data platform solutions meet those needs.
  • Drive innovation by exploring new technologies and tools to enhance the data platform capabilities.
  • Monitor performance metrics, identify areas for improvement, and implement solutions to optimize data platform efficiency.
  • Provide guidance and mentorship to team members, fostering a culture of continuous learning and development.

Apache Airflow, Cloud Computing, Cybersecurity, ETL, Machine Learning, Apache Kafka, Data engineering, NoSQL, Data visualization, Data modeling

Posted about 3 hours ago
Apply
🔥 Data Engineer (Remote)
Posted about 6 hours ago

📍 Rio de Janeiro, BR, São Paulo, BR, Buenos Aires, AR, Mexico City, MX

🧭 Contract

💸 4,850 - 5,250 USD per month

🔍 Software Development

🏢 Company: Blue Orange Digital | 👥 101-250 | 💰 $699,999 Corporate round about 3 years ago | Cloud Data Services, Artificial Intelligence (AI), Big Data, Predictive Analytics, Data Integration, Machine Learning, Analytics, Data Visualization, Software

  • BA/BS degree in Computer Science or a related technical field, or equivalent practical experience.
  • At least 2 years of experience building and supporting data platforms; exposure to data technologies like Azure Data Factory, Azure Synapse Analytics, Airflow, Spark, etc.
  • Experience with Cloud Data Platforms, like Snowflake and Databricks.
  • Advanced level Python, SQL, and Bash scripting.
  • Experience designing and building robust CI/CD pipelines.
  • Strong Linux system administration skills.
  • Comfortable with Docker, configuration management, and monitoring tools.
  • Knowledge of best practices related to security, performance, and disaster recovery.
  • Experience working in cloud environments, at a minimum experience in Azure and AWS.
  • Enjoys collaborating with other engineers on architecture and sharing designs with the team.
  • Excellent verbal and written English communication.
  • Interacts with others using sound judgment, good humor, and consistent fairness in a fast-paced environment.
  • Work with data teams to help design, build, and deploy data platforms in the cloud (Azure, AWS, GCP) and automate their operation.
  • Work with Azure DevOps, Azure Pipelines, Terraform, CloudFormation, and other Automation and infrastructure tools to build robust systems.
  • Work with Databricks, Spark, Python, and other data orchestration and ETL tools to build high-performance data pipelines.
  • Provide leadership in applying software development principles and best practices, including Continuous Integration, Continuous Delivery/Deployment, and managing Infrastructure as Code and automated Testing across multiple software applications.
  • Support heterogeneous technology environments, including both Windows and Linux systems.
  • Develop reusable, automated processes, and custom tools.
  • Any other duties as directed by your direct manager.

AWS, Docker, Python, SQL, Apache Airflow, Bash, Cloud Computing, ETL, Snowflake, Azure, Data engineering, Spark, CI/CD, Linux, DevOps, Terraform, Data modeling, Scripting

Posted about 6 hours ago
Apply
🔥 Senior Data Engineer 3
Posted about 7 hours ago

📍 Colombia

🧭 Full-Time

🔍 Software Development

  • 5+ years of experience in developing scalable data pipeline infrastructure, preferably for sales organizations
  • Proven track record of delivering large-scale data projects and working with business partners
  • Experience with big data processing frameworks such as Apache Spark
  • Experience with data orchestration tools like Airflow or Dagster
  • Experience with infrastructure-as-code tools (e.g., Terraform) and modern CI/CD pipelines
  • Collaborate with other engineers, business partners, and data scientists to build best-in-class data infrastructure that meets evolving needs
  • Design and implement scalable data pipelines that integrate Salesforce and other sales systems data into our enterprise data lake
  • Build automated solutions for sales data quality, enrichment, and standardization
  • Create and maintain data models that power sales analytics, forecasting, and reporting systems
  • Design and manage reverse ETL pipelines to power sales operations and marketing automation
  • Partner with AI/ML engineers to develop predictive and generative models for Sales
  • Architect solutions for real-time sales data synchronization and processing
  • Optimize data flows between Salesforce, Snowflake, AWS Athena, and other enterprise systems
  • Build robust monitoring and alerting systems for sales data pipelines
  • Collaborate with Sales Operations to automate manual processes and improve data accuracy
  • Create documentation and enable self-service capabilities for sales teams

AWS, Python, Apache Airflow, Salesforce, Snowflake, Data engineering, CI/CD, Terraform, Data modeling

Posted about 7 hours ago
Apply
🔥 Data Engineer
Posted about 19 hours ago

📍 United States

💸 97,000 - 153,595 USD per year

🔍 Healthcare

🏢 Company: healthfirst

  • Work experience in a data engineering role
  • Work experience with data programming languages such as Java or Python
  • Work experience in a Big Data ecosystem processing data including file systems, data structures/databases, automation, security, messaging, movement, etc.
  • Work experience in a production cloud infrastructure
  • Bachelor’s Degree in computer engineering or related field (preferred)
  • Hands-on experience leading healthcare data transformation initiatives from on-premises to cloud deployment (preferred)
  • Demonstrated experience working in an Agile environment as a Data Engineer (preferred)
  • Hands-on work with Amazon Web Services, including creating Redshift data structures, accessing them with Spectrum, and storing data in S3 (preferred)
  • Proven results using an analytical perspective to identify engineering patterns within complex strategies and ideas, and break them down into engineered code components (preferred)
  • Knowledge of provider-sponsored health insurance systems/processes and the Healthcare industry (preferred)
  • Experience developing, prototyping, and testing engineered processes, products, or services (preferred)
  • Proficiency with relational, graph, and NoSQL databases; expertise in SQL (preferred)
  • Demonstrates critical thinking skills with ability to problem solve (preferred)
  • Excellent interpersonal skills with proven ability to influence with impact across functions and disciplines (preferred)
  • Skilled in Microsoft Office including Project, PowerPoint, Word, Excel and Visio (preferred)
  • Finds trends in datasets and develops workflows and algorithms to make raw data useful to the enterprise
  • Designs and implements standardized data management procedures around data staging, data ingestion, data preparation, data provisioning, and data destruction (scripts, programs, automation, assisted by automation, etc.)
  • Ensures quality of technical solutions as data moves across Healthfirst environments
  • Provides insight into the changing data environment, data processing, data storage, and utilization requirements for the company and offers suggestions for solutions
  • Ensures managed analytic assets support Healthfirst’s strategic goals by creating and verifying data acquisition requirements and strategy
  • Develops, constructs, tests and maintains architectures
  • Aligns architecture with business requirements and uses programming languages and tools
  • Identifies ways to improve data reliability, efficiency, and quality
  • Conducts research for industry and business questions
  • Deploys sophisticated analytics programs, machine learning and statistical methods
  • Prepares data for predictive and prescriptive modeling and finds hidden patterns using data
  • Uses data to discover tasks that can be automated
  • Creates data monitoring capabilities for each business process and works with data consumers on updates
  • Aligns data architecture to Healthfirst solution architecture; contributes to overall solution architecture
  • Helps maintain the integrity and security of the company data
  • Additional duties as assigned or required

AWS, Python, SQL, Agile, Apache Airflow, Cloud Computing, ETL, Java, Java EE, Amazon Web Services, Data engineering, Data Structures, NoSQL, Data visualization, Data modeling, Scripting, Data analytics, Data management

Posted about 19 hours ago
Apply

πŸ“ Ireland

🧭 Full-Time

πŸ” Fintech

🏒 Company: HopperπŸ‘₯ 501-1000πŸ’° $96,000,000 over 2 years agoπŸ«‚ Last layoff 6 months agoBig DataPredictive AnalyticsAppsMobile AppsTravel

  • Strong development skills in Python, Scala, and SQL.
  • Deep understanding of ML algorithms and frameworks like pandas, sklearn, Flyte, TensorFlow, etc.
  • Familiarity with data modeling, software architecture and distributed data processing tools.
  • Implement automated, reusable ML training pipelines.
  • Build data ETL pipelines, including appropriate feature engineering
  • Develop and deploy ML real-time pricing solutions to production.
  • Monitor and optimize for low-latency and minimal training/serving skew.
  • Collaborate with data scientists, engineers and product stakeholders to define and implement relevant solutions.

Python, SQL, Apache Airflow, ETL, Machine Learning, Software Architecture, Data engineering, Data Structures, REST API, Pandas, Tensorflow, Scala, Data modeling

Posted 1 day ago
Apply

πŸ“ Canada

🧭 Full-Time

πŸ” Fintech

🏒 Company: HopperπŸ‘₯ 501-1000πŸ’° $96,000,000 over 2 years agoπŸ«‚ Last layoff 6 months agoBig DataPredictive AnalyticsAppsMobile AppsTravel

  • Strong development skills in Python, Scala, and SQL.
  • Deep understanding of ML algorithms and frameworks like pandas, sklearn, Flyte, TensorFlow, etc.
  • Familiarity with data modeling, software architecture and distributed data processing tools.
  • Strong analytical and problem solving skills, with attention to detail.
  • Excellent communication and collaboration skills.
  • Implement automated, reusable ML training pipelines.
  • Build data ETL pipelines, including appropriate feature engineering
  • Develop and deploy ML real-time pricing solutions to production.
  • Monitor and optimize for low-latency and minimal training/serving skew.
  • Collaborate with data scientists, engineers and product stakeholders to define and implement relevant solutions.

Python, SQL, Apache Airflow, ETL, Machine Learning, Software Architecture, REST API, Pandas, Tensorflow, Scala, Data modeling

Posted 1 day ago
Apply

πŸ“ United States

πŸ’Έ 215000.0 - 307100.0 USD per year

πŸ” Software Development

🏒 Company: Veeam SoftwareπŸ‘₯ 5001-10000πŸ’° $2,000,000,000 Secondary Market 6 months agoπŸ«‚ Last layoff over 1 year agoVirtualizationData ManagementData CenterEnterprise SoftwareSoftwareCloud Infrastructure

  • 10+ years of experience as a software engineer, the bulk of it building and deploying production data pipelines at scale in the cloud (preferably AWS) using orchestration tools like Apache Airflow.
  • Proven experience in designing and scaling highly performant data pipelines leveraging AWS services such as S3, Athena, Lambda, and RDS.
  • Fluency in a modern programming language relevant to data engineering (e.g., Python).
  • Strong experience working with relational databases (e.g., PostgreSQL, MySQL, etc.) and data warehousing solutions (e.g., AWS Athena).
  • Experience in leading other engineers in designing and implementing solutions for building and managing data pipelines and data infrastructure.
  • Experience running data pipelines in multiple cloud environments is highly desirable.
  • Designing and implementing scalable ETL/ELT data pipelines handling large volumes of data to support global expansion of our patent-pending forensics products that have been used by thousands of customers.
  • Driving the expansion and management of our data processing infrastructure across multiple AWS regions.
  • Analyzing current data architecture for security, scalability, performance, and data quality, and implementing appropriate solutions.
  • Developing and deploying serverless data processing solutions using AWS Lambda, orchestrated using Apache Airflow.
  • Designing and optimizing data architecture within relational databases (e.g., AWS RDS) and data warehouses (e.g., AWS Athena).
  • Collaborating with cross-functional teams, including the product manager, UX designer, and other engineers, to understand data processing requirements and translate them into technical solutions.
  • Writing clean, maintainable, and well-documented code following best practices and coding standards, with data quality and security top of mind.
  • Implementing observability and alerting for data pipelines to proactively identify and resolve issues.
  • Conducting code and data pipeline reviews, providing constructive feedback, and mentoring peers to ensure data quality and continuous improvement.

AWS, Leadership, PostgreSQL, Python, Software Development, SQL, Amazon RDS, Apache Airflow, Cloud Computing, Data Analysis, ElasticSearch, ETL, Git, Software Architecture, Amazon Web Services, Data engineering, Serverless, CI/CD, Problem Solving, RESTful APIs, Mentoring, Written communication, Microservices, Data visualization, Data modeling, Data management

Posted 1 day ago
Apply

πŸ“ United States

🧭 Full-Time

πŸ’Έ 160000.0 - 230000.0 USD per year

πŸ” Daily Fantasy Sports

🏒 Company: PrizePicksπŸ‘₯ 101-250πŸ’° Corporate about 2 years agoGamingFantasy SportsSports

  • 7+ years of experience in a data engineering or data-oriented software engineering role, creating and pushing end-to-end data engineering pipelines.
  • 3+ years of experience acting as technical lead and providing mentorship and feedback to junior engineers.
  • Extensive experience building and optimizing cloud-based data streaming pipelines and infrastructure.
  • Extensive experience exposing real-time predictive model outputs to production-grade systems leveraging large-scale distributed data processing and model training.
  • Experience in most of the following:
  • Excellent organizational, communication, presentation, and collaboration skills, with experience working with both technical and non-technical teams
  • Graduate degree in Computer Science, Mathematics, Informatics, Information Systems or other quantitative field
  • Enhance the capabilities of our existing Core Data Platform and develop new integrations with both internal and external APIs within the Data organization.
  • Develop and maintain advanced data pipelines and transformation logic using Python and Go, ensuring efficient and reliable data processing.
  • Collaborate with Data Scientists and Data Science Engineers to support the needs of advanced ML development.
  • Collaborate with Analytics Engineers to enhance data transformation processes, streamline CI/CD pipelines, and optimize team collaboration workflows using dbt.
  • Work closely with DevOps and Infrastructure teams to ensure the maturity and success of the Core Data platform.
  • Guide teams in implementing and maintaining comprehensive monitoring, alerting, and documentation practices, and coordinate with Engineering teams to ensure continuous feature availability.
  • Design and implement Infrastructure as Code (IaC) solutions to automate and streamline data infrastructure deployment, ensuring scalable, consistent configurations aligned with data engineering best practices.
  • Build and maintain CI/CD pipelines to automate the deployment of data solutions, ensuring robust testing, seamless integration, and adherence to best practices in version control, automation, and quality assurance.
  • Design and automate data governance workflows and tool integrations across complex environments, ensuring data integrity and protection throughout the data lifecycle
  • Serve as a Staff Engineer within the broader PrizePicks technology organization by staying current with emerging technologies, implementing innovative solutions, and sharing knowledge and best practices with junior team members and collaborators.
  • Ensure code is thoroughly tested, effectively integrated, and efficiently deployed, in alignment with industry best practices for version control, automation, and quality assurance.
  • Mentor and support junior engineers by providing guidance, coaching and educational opportunities
  • Provide on-call support as part of a shared rotation between the Data and Analytics Engineering teams to maintain system reliability and respond to critical issues.

AWS, Backend Development, Docker, Leadership, Python, SQL, Apache Airflow, Cloud Computing, ETL, Git, Kafka, Kubernetes, Rabbitmq, Algorithms, Apache Kafka, Data engineering, Data Structures, Go, Postgres, Spark, Communication Skills, Analytical Skills, Collaboration, CI/CD, Problem Solving, RESTful APIs, Mentoring, Linux, DevOps, Terraform, Excellent communication skills, Strong communication skills, Data visualization, Data modeling, Scripting, Software Engineering, Data analytics, Data management

Posted 2 days ago
Apply

πŸ“ Finland, Sweden, Lisbon, the UK, and the US

🧭 Full-Time

πŸ” Retail

🏒 Company: RELEX SolutionsπŸ‘₯ 501-1000πŸ’° over 1 year agoSoftware Development

  • 3-5 years of experience in applied data science, from ideation to production deployment and operations
  • Deep understanding of stochastic systems
  • Strong problem-solving skills, with the ability to translate business challenges into solutions that work in real life in a very large-scale environment with extremely high SLA requirements
  • Proficiency in Python, Azure, Snowflake
  • Proven track record of continuous learning and growth
  • Strong creative and independent thinking, questioning status quo
  • Excellent communication, business understanding, and teamwork skills.
  • T-shaped competency profile: a generalist in many areas, with deep understanding in some relevant areas
  • Develop and deploy solutions to solve real-world replenishment challenges.
  • Work with large datasets and simulations to create scalable and trustworthy solutions that deliver measurable business value.
  • Collaborate with customers and other RELEX teams to identify opportunities and shape the R&D roadmap.
  • Communicate results and complex technical concepts effectively to stakeholders at all levels.
  • Proactively innovate new research ideas to be added to the Data Science roadmap
  • Actively collaborate with other teams in our department to find synergies and ways to make our product holistically better

Python, SQL, Apache Airflow, Data Analysis, Machine Learning, Numpy, Snowflake, Algorithms, Azure, Data engineering, Data science, Data Structures, Pandas, Communication Skills, Problem Solving, RESTful APIs, Data visualization, Data modeling

Posted 2 days ago
Apply
Showing 10 of 204 jobs

Ready to Start Your Remote Journey?

Apply to 5 jobs per day for free, or get unlimited applications with a subscription starting at €5/week.

Why do Job Seekers Choose Our Platform for Remote Work Opportunities?

We’ve developed a well-thought-out service for work-from-home job matching, making the search process easier and more efficient.

AI-powered Job Processing and Advanced Filters

Our algorithms process thousands of job postings daily, extracting only the key information from each listing. This allows you to skip lengthy descriptions and focus only on the offers that match your requirements.

With powerful skill filters, you can specify your core competencies to instantly receive a selection of job opportunities that align with your experience. 

Search by Country of Residence

For those looking for fully remote jobs in their own country, our platform offers the ability to customize the search based on your location. This is especially useful if you want to adhere to local laws, consider time zones, or work with employers familiar with local specifics.

If necessary, you can also work remotely with employers from other countries without being limited by geographical boundaries.

Regular Data Update

Our platform features over 40,000 remote full-time and part-time job offers from 7,000 companies. This wide range ensures you can find offers that suit your preferences, whether from startups or large corporations.

We regularly verify the validity of vacancy listings and automatically remove outdated or filled positions, ensuring that you only see active and relevant opportunities.

Job Alerts

Once you register, you can set up convenient notification methods, such as receiving tailored job listings directly to your email or via Telegram. This ensures you never miss out on a great opportunity.

Our job board allows you to apply for up to 5 vacancies per day absolutely for free. If you wish to apply for more, you can choose a suitable subscription plan with weekly, monthly, or annual payments.

Wide Range of Completely Remote Online Jobs

On our platform, you'll find fully remote work positions in the following fields:

  • IT and Programming — software development, website creation, mobile app development, system administration, testing, and support.
  • Design and Creative — graphic design, UX/UI design, video content creation, animation, 3D modeling, and illustrations.
  • Marketing and Sales — digital marketing, SMM, contextual advertising, SEO, product management, sales, and customer service.
  • Education and Online Tutoring — teaching foreign languages, school and university subjects, exam preparation, training, and coaching.
  • Content — creating written content for websites, blogs, and social media; translation, editing, and proofreading.
  • Administrative Roles (Assistants, Operators) — virtual assistants, work organization support, calendar management, and document workflow assistance.
  • Finance and Accounting — bookkeeping, reporting, financial consulting, and taxes.

Other careers include: online consulting, market research, project management, and technical support.

All Types of Employment

The platform offers online remote jobs with different types of work:

  • Full-time — the ideal choice for those who value stability and predictability;
  • Part-time — perfect for those looking for a side job from home or seeking a balance between work and personal life;
  • Contract — suited for professionals who want to work on projects for a set period;
  • Temporary — short-term work, either full-time or part-time, often offered for seasonal or urgent tasks;
  • Internship — a form of on-the-job training that allows you to gain practical experience in your chosen field.

Whether you're looking for stable full-time employment, the flexibility of freelancing, or a part-time side gig, you'll find plenty of options on Remoote.app.

Remote Working Opportunities for All Expertise Levels

We feature offers for people with all levels of expertise:

  • For beginners — ideal positions for those just starting their journey in online work from home;
  • For intermediate specialists — if you already have experience, you can explore positions requiring specific skills and knowledge in your field;
  • For experts — roles for highly skilled professionals ready to tackle complex tasks.

How to Start Your Online Job Search Through Our Platform?

To begin searching for home job opportunities, follow these three steps:

  1. Register and complete your profile. This process takes minimal time.
  2. Specify your skills, country of residence, and preferred position.
  3. Receive notifications about new vacancy openings and apply to suitable ones.

If you don't have a resume yet, use our online builder. It will help you create a professional document, highlighting your key skills and achievements. The AI will automatically optimize it to match job requirements, increasing your chances of a successful response. You can update your profile information at any time: modify your skills, add new preferences, or upload an updated resume.