Remote Working

Remote working from home offers convenience and freedom, a lifestyle embraced by millions of people around the world. With our platform, finding the right job, whether full-time or part-time, becomes quick and easy thanks to AI, precise filters, and daily updates. Sign up now and start your online career today!

Kafka
432 jobs found.

Set alerts to receive daily emails with new job openings that match your preferences.

Apply

πŸ“ Australia

🏒 Company: vernova_externalsite

  • Experience managing enterprise customers in Distributed Energy Resources Management Systems (DERMS).
  • Experience in IT software support, including Kubernetes and Kafka, preferred.
  • Bachelor's degree in Electrical Engineering or Computer Science.
  • Provide account management for Premier Support and Escalated accounts.
  • Develop and drive action plans to accelerate issue resolution.
  • Maintain customer communication, and advocate for the customer to ensure successful implementation and operation of GE Digital software solutions.
  • Develop specialized knowledge in your discipline.
  • Serve as a best-practice and quality resource.

Project Management, SQL, Data Analysis, Kafka, Kubernetes, Communication Skills, Customer Service, Account Management, Technical Support, Customer Success

Posted about 10 hours ago
Apply

πŸ“ Australia

πŸ” Utilities

🏒 Company: vernova_externalsite

  • 10+ years in enterprise or solution architecture in the utilities, OT, or critical infrastructure domain.
  • Strong hands-on knowledge of: Kubernetes (RKE2/AKS), Istio, Helm
  • Experience working in regulated industries and aligning with compliance and cybersecurity standards (ISO27001, NIST, Australian Signals Directorate)
  • Architect scalable, secure, and cloud-native GridOS solutions across DERMS and ADMS platforms.
  • Define end-to-end technical architecture for control systems, telemetry pipelines, and DER integration using Kafka, ActiveMQ, Helm, Istio, MinIO, and PostgreSQL.
  • Lead technical solutioning in pre-sales (ITO) efforts; provide expert architectural input into Statements of Work (SOWs), estimates, and delivery assumptions.

Leadership, PostgreSQL, SQL, Cloud Computing, Cybersecurity, Git, Kafka, Kubernetes, LDAP, Microsoft Azure, Microsoft Power BI, Software Architecture, ActiveMQ, API Testing, Azure, REST API, Communication Skills, CI/CD, Problem Solving, Microsoft Office, Agile Methodologies, Mentoring, Linux, DevOps, Terraform, Compliance

Posted about 10 hours ago
Apply

πŸ“ Poland

πŸ’Έ 22900.0 - 29900.0 PLN per month

πŸ” Threat Intelligence

  • Bachelor's degree in Computer Science, Engineering, or a related field.
  • 2+ years of experience as a Data Engineer, working with large-scale distributed systems.
  • Proven expertise in Lakehouse architecture and Apache Hudi in production environments.
  • Experience with Airflow, Kafka, or streaming data pipelines.
  • Strong programming skills in Python and PySpark.
  • Comfortable working in a cloud-based environment (preferably AWS).
  • Design, build, and manage scalable data pipelines using Python, SQL, and PySpark.
  • Develop and maintain lakehouse architectures, with hands-on use of Apache Hudi for data versioning, upserts, and compaction.
  • Implement efficient ETL/ELT processes for both batch and real-time data ingestion.
  • Optimize data storage and query performance across large datasets (partitioning, indexing, compaction).
  • Ensure data quality, governance, and lineage, integrating validation and monitoring into pipelines.
  • Work with cloud-native services (preferably AWS – S3, Athena, EMR) to support modern data workflows.
  • Collaborate closely with data scientists, analysts, and platform engineers to deliver reliable data infrastructure

AWS, Python, SQL, Cloud Computing, ETL, Kafka, Airflow, Data Engineering, CI/CD, Terraform

Posted about 11 hours ago
Apply

πŸ“ United States

🧭 Full-Time

πŸ” Information Security

  • 5+ years of experience in security engineering, with a primary focus on SIEM platforms.
  • Hands-on experience with at least two of the following SIEM platforms: Splunk, Microsoft Sentinel, Elastic, Google SecOps, CrowdStrike NG-SIEM, LogScale
  • 2+ years of experience with Cribl or similar observability pipeline tools (e.g., Logstash, Fluentd, Kafka).
  • Strong knowledge of log formats, data normalization, and event correlation.
  • Familiarity with detection engineering, threat modeling, and MITRE ATT&CK framework.
  • Proficiency with scripting (e.g., Python, PowerShell, Bash) and regular expressions.
  • Deep understanding of logging from cloud (AWS, Azure, GCP) and on-prem environments.
  • Architect, implement, and maintain SIEM solutions with a focus on modern platforms
  • Design and manage log ingestion pipelines using tools such as Cribl Stream, Edge, or Search (or similar).
  • Optimize data routing, enrichment, and filtering to improve SIEM efficiency and cost control.
  • Collaborate with cybersecurity, DevOps, and cloud infrastructure teams to integrate log sources and telemetry data.
  • Develop custom parsers, dashboards, correlation rules, and alerting logic for security analytics and threat detection.
  • Maintain and enhance system reliability, scalability, and performance of logging infrastructure.
  • Provide expertise and guidance on log normalization, storage strategy, and data retention policies.
  • Lead incident response investigations and assist with root cause analysis leveraging SIEM insights.
  • Mentor junior engineers and contribute to strategic security monitoring initiatives.

AWS, Python, Bash, Cloud Computing, GCP, Kafka, Kubernetes, API Testing, Azure, Data Engineering, CI/CD, RESTful APIs, Linux, DevOps, JSON, Ansible, Scripting

Posted about 16 hours ago
Apply
🔥 Span - Product Engineer
Posted about 16 hours ago

πŸ“ Argentina

🧭 Full-Time

πŸ” Software Development

🏒 Company: Silver.dev

  • Resourceful individuals who thrive in a high-agency environment.
  • Have been a founder, seed-stage engineering hire, or have launched your own project before.
  • Have strong product sense.
  • Have experience working with LLMs and agentic workflows.
  • Help with projects that root-cause KTLO (keep-the-lights-on) work and recommend solutions.
  • Develop a software catalog.
  • Help protect engineering focus time by systemically solving sources of distraction or mental load with AI.

AWS, Backend Development, PostgreSQL, Kafka, Kubernetes, TypeScript, Redis, Nest.js, React, Software Engineering

Apply

πŸ“ United States, Canada

🧭 Full-Time

πŸ’Έ 155000.0 - 255000.0 USD per year

πŸ” Cybersecurity

🏒 Company: crowdstrikecareers

  • 10+ years of experience in software development, with a focus on cloud-native architectures and distributed systems.
  • Expert-level proficiency in at least one modern programming language such as Go (preferred), Python, Java, or C#.
  • Demonstrated experience in designing and implementing large-scale, high-performance data processing systems.
  • Strong understanding of security concepts, threat detection methodologies, and UEBA principles.
  • Proven track record of leading complex technical projects and delivering results on schedule.
  • Experience with cloud platforms (preferably AWS) and containerization technologies like Docker and Kubernetes.
  • Excellent communication skills, with the ability to explain complex technical concepts to both technical and non-technical audiences.
  • A collaborative mindset and the ability to work effectively across teams and disciplines.
  • Lead the design and development of cloud-native microservices for our Next-Gen SIEM platform, focusing on detections and UEBA capabilities that can process and analyze trillions of events per day.
  • Take end-to-end ownership of complex, high-impact projects across multiple teams, driving technical decisions and providing architectural guidance using collaborative tools like Miro.
  • Partner with cross-functional teams to define, design, and implement solutions that enhance threat detection and analysis capabilities. Contribute to the medium-term strategic and technical direction by identifying areas of greatest need, and creating plans for improvement.
  • Utilize and integrate technologies such as Go, Kafka, Redis, OpenSearch, PostgreSQL, and more to build robust, scalable solutions.
  • Optimize and scale existing systems for improved stability, performance, and reliability across business-critical infrastructure, using monitoring tools like Grafana to track and analyze system metrics.
  • Mentor junior engineers through pair programming, code reviews, and knowledge sharing, fostering a culture of technical excellence. Additionally, participate in the interview process and coach/mentor new interviewers to maintain high hiring standards.
  • Champion software engineering best practices to ensure high-quality deliverables, including robust testing strategies, effective code reviews, comprehensive documentation, continuous integration/deployment, and adherence to architectural principles that promote scalability and maintainability.
  • Participate in and lead technical working groups that influence the broader Product team or industry.
  • Provide monitoring and operational support for production services, including participating in an on-call rotation for one week approximately every 10-12 weeks.
  • Be given the autonomy to own your work in a high trust environment, managing tasks and priorities effectively using Jira.

AWS, Backend Development, Docker, PostgreSQL, Software Development, Cloud Computing, Cybersecurity, Kafka, Kubernetes, Go, Redis, CI/CD, RESTful APIs, DevOps, Microservices

Posted about 17 hours ago
Apply

πŸ“ United Kingdom, Ireland

πŸ” Cybersecurity

🏒 Company: crowdstrikecareers

  • Degree in Computer Science (or commensurate experience in data structures/algorithms/distributed systems).
  • The ability to scale backend systems – sharding, partitioning, scaling horizontally are second nature to you.
  • Solid understanding of distributed systems and scalability challenges.
  • A thorough understanding of engineering best practices from appropriate testing paradigms to effective peer code reviews and resilient architecture.
  • The ability to thrive in a fast paced, test-driven, collaborative and iterative programming environment.
  • The skills to meet your commitments on time and produce high quality software that is unit tested, code reviewed, and checked in regularly for continuous integration.
  • Design, develop, document, test, deploy, maintain, and enhance large scale services.
  • Take ownership and be accountable for deliverables.
  • Triage system issues and debug by analyzing the sources of issues and the impact on service operations.
  • Mentor web and backend engineers on the use of our feature services.
  • Constantly re-evaluate our products to improve architecture, testing coverage, knowledge models, user experience, performance, observability and stability.
  • Partner with product teams to understand their needs, work with PMs to document new requirements, and implement those features within our feature services.

AWS, Backend Development, Python, Software Development, Git, Kafka, Kubernetes, Algorithms, API Testing, Cassandra, Data Structures, Go, Postgres, Redis, CI/CD, RESTful APIs, Linux, DevOps, Microservices, Software Engineering

Posted about 17 hours ago
Apply
🔥 Data Engineer (Contract)
Posted about 19 hours ago

πŸ“ LatAm

🧭 Contract

🏒 Company: AbleRentalProperty ManagementReal Estate

  • 10+ years of data engineering experience with enterprise-scale systems
  • Expertise in Apache Spark and Delta Lake, including ACID transactions, time travel, Z-ordering, and compaction
  • Deep knowledge of Databricks (Jobs, Clusters, Workspaces, Delta Live Tables, Unity Catalog)
  • Experience building scalable ETL/ELT pipelines using tools like Airflow, Glue, Dataflow, or ADF
  • Advanced SQL for data modeling and transformation
  • Strong programming skills in Python (or Scala)
  • Hands-on experience with data formats such as Parquet, Avro, and JSON
  • Familiarity with schema evolution, versioning, and backfilling strategies
  • Working knowledge of at least one major cloud platform: AWS (S3, Athena, Redshift, Glue Catalog, Step Functions), GCP (BigQuery, Cloud Storage, Dataflow, Pub/Sub), or Azure (Synapse, Data Factory, Azure Databricks)
  • Experience designing data architectures with real-time or streaming data (Kafka, Kinesis)
  • Consulting or client-facing experience with strong communication and leadership skills
  • Experience with data mesh architectures and domain-driven data design
  • Knowledge of metadata management, data cataloging, and lineage tracking tools
  • Shape large-scale data architecture vision and roadmap across client engagements
  • Establish governance, security frameworks, and regulatory compliance standards
  • Lead strategy around platform selection, integration, and scaling
  • Guide organizations in adopting data lakehouse and federated data models
  • Lead technical discovery sessions to understand client needs
  • Translate complex architectures into clear, actionable value for stakeholders
  • Build trusted advisor relationships and guide strategic decisions
  • Align architecture recommendations with business growth and goals
  • Design and implement modern data lakehouse architectures with Delta Lake and Databricks
  • Build and manage ETL/ELT pipelines at scale using Spark (PySpark preferred)
  • Leverage Delta Live Tables, Unity Catalog, and schema evolution features
  • Optimize storage and queries on cloud object storage (e.g., AWS S3, Azure Data Lake)
  • Integrate with cloud-native services like AWS Glue, GCP Dataflow, and Azure Synapse Analytics
  • Implement data quality monitoring, lineage tracking, and schema versioning
  • Build scalable pipelines with tools like Apache Airflow, Step Functions, and Cloud Composer
  • Develop cost-optimized, scalable, and compliant data solutions
  • Design POCs and pilots to validate technical approaches
  • Translate business requirements into production-ready data systems
  • Define and track success metrics for platform and pipeline initiatives

AWS, Python, SQL, Cloud Computing, ETL, GCP, Kafka, Airflow, Azure, Data Engineering, Scala, Data Modeling

Apply

πŸ“ Mexico

🧭 Full-Time

πŸ” Software Development

🏒 Company: Varicent

  • 5+ years of experience in modern web development with React, Node.js, TypeScript, and JavaScript.
  • Strong background with microservices, cloud architecture, and serverless development (preferably AWS).
  • Solid knowledge of SQL, NoSQL, and API design (REST/GraphQL).
  • Experience in automated testing, CI/CD, and agile delivery environments.
  • Lead the design and development of new features using the MERN stack.
  • Build cloud-native apps with AWS Lambda, Aurora, DynamoDB, ECS, and GraphQL.
  • Improve system scalability, performance, and architecture.
  • Write clean, efficient, and maintainable code with CI/CD and test automation.
  • Collaborate closely with product and design to deliver intuitive user experiences.
  • Guide junior developers through code reviews and technical mentoring.

AWS, Backend Development, Docker, GraphQL, Node.js, Software Development, SQL, Agile, DynamoDB, Express.js, Frontend Development, JavaScript, Kafka, React.js, TypeScript, API Testing, REST API, Redux, Serverless, TestRail, React, Communication Skills, CI/CD, Problem Solving, Mentoring, Microservices, Technical Support, Software Engineering, English Communication

Posted about 21 hours ago
Apply

πŸ“ United States

🧭 Full-Time

πŸ” Software Development

🏒 Company: Buzz Solutions

  • 8+ years of industry experience with modern systems development, ideally end-to-end pipeline and application development
  • Track record of shipping complex backend features end-to-end
  • Ability to translate customer requirements into technical solutions
  • Strong programming and computer science fundamentals and quality standards
  • Experience with Python, modern web frameworks (FastAPI), and Pydantic
  • Experience designing, implementing, and debugging web technologies and server architecture
  • Experience with modern python packaging and distribution (uv, poetry)
  • Deep understanding of distributed systems and scalable architecture
  • Experience building reusable, modular systems that enable rapid development and easy modification
  • Strong experience with data storage systems (PostgreSQL, Redis, BigQuery, MongoDB)
  • Expertise with queuing/streaming systems (RabbitMQ, Kafka, SQS)
  • Expertise with workflow orchestration frameworks (Celery, Temporal, Airflow) and DAG-based processing
  • Proficiency in utilizing and maintaining cloud infrastructure services (Google Cloud/AWS/Azure)
  • Experience with Kubernetes for container orchestration and deployment
  • Solid grasp of system design patterns and tradeoffs
  • Experience and in-depth understanding of AI/ML systems integration
  • Deep understanding of the ML lifecycle
  • Experience with big data technologies and data pipeline development
  • Experience containerizing and deploying ML applications (Docker) for training and inference workloads
  • Experience with real-time streaming and batch processing systems for ML model workflows
  • Experience with vector databases and search systems for similarity search and embeddings
  • Partner closely with engineering (software, data, and machine learning), product, and design leadership to define product-led growth strategy with an ownership-driven approach
  • Establish best practices, frameworks, and repeatable processes to measure the impact of every feature shipped, taking initiative to identify and solve problems proactively
  • Make effective tradeoffs considering business priorities, user experience, and sustainable technical foundation with a startup mindset focused on rapid iteration and results
  • Develop and lead team execution against both short-term and long-term roadmaps, demonstrating self-starter qualities and end-to-end accountability
  • Mentor and grow team members to be successful contributors while fostering an ownership culture and entrepreneurial thinking
  • Build and maintain backend systems and data pipelines for AI-based software platforms, integrating SQL/NoSQL databases and collaborating with engineering teams to enhance performance
  • Design, deploy, and optimize cloud infrastructure on Google Cloud Platform, including Kubernetes clusters, virtual machines, and cost-effective scalable architecture
  • Implement comprehensive MLOps workflows including model registry, deployment pipelines, monitoring systems for model drift, and CI/CD automation for ML-based backend services
  • Establish robust testing, monitoring, and security frameworks including unit/stress testing, vulnerability assessments, and customer usage analytics
  • Drive technical excellence through documentation, code reviews, standardized practices, and strategic technology stack recommendations

AWS, Backend Development, Docker, PostgreSQL, Python, SQL, Cloud Computing, GCP, Kafka, Kubernetes, Machine Learning, MLFlow, MongoDB, RabbitMQ, Airflow, FastAPI, Redis, NoSQL, CI/CD, RESTful APIs, Microservices

Posted about 21 hours ago
Apply
Showing 10 of 432

Ready to Start Your Remote Journey?

Apply to 5 jobs per day for free, or get unlimited applications with a subscription starting at €5/week.

Why do Job Seekers Choose Our Platform for Remote Work Opportunities?

We've developed a well-thought-out service for home job matching, making the search process easier and more efficient.

AI-powered Job Processing and Advanced Filters

Our algorithms process thousands of job postings daily, extracting only the key information from each listing. This allows you to skip lengthy texts and focus only on the offers that match your requirements.

With powerful skill filters, you can specify your core competencies to instantly receive a selection of job opportunities that align with your experience. 

Search by Country of Residence

For those looking for fully remote jobs in their own country, our platform offers the ability to customize the search based on your location. This is especially useful if you want to adhere to local laws, consider time zones, or work with employers familiar with local specifics.

If necessary, you can also work remotely with employers from other countries without being limited by geographical boundaries.

Regular Data Update

Our platform features over 40,000 remote work offers with full-time or part-time positions from 7,000 companies. This wide range ensures you can find offers that suit your preferences, whether from startups or large corporations.

We regularly verify the validity of vacancy listings and automatically remove outdated or filled positions, ensuring that you only see active and relevant opportunities.

Job Alerts

Once you register, you can set up convenient notification methods, such as receiving tailored job listings directly to your email or via Telegram. This ensures you never miss out on a great opportunity.

Our job board allows you to apply for up to 5 vacancies per day absolutely free. If you wish to apply to more, you can choose a suitable subscription plan with weekly, monthly, or annual payments.

Wide Range of Completely Remote Online Jobs

On our platform, you'll find fully remote work positions in the following fields:

  • IT and Programming – software development, website creation, mobile app development, system administration, testing, and support.
  • Design and Creative – graphic design, UX/UI design, video content creation, animation, 3D modeling, and illustrations.
  • Marketing and Sales – digital marketing, SMM, contextual advertising, SEO, product management, sales, and customer service.
  • Education and Online Tutoring – teaching foreign languages, school and university subjects, exam preparation, training, and coaching.
  • Content – creating written content for websites, blogs, and social media; translation, editing, and proofreading.
  • Administrative Roles (Assistants, Operators) – virtual assistants, work organization support, calendar management, and document workflow assistance.
  • Finance and Accounting – bookkeeping, reporting, financial consulting, and taxes.

Other careers include: online consulting, market research, project management, and technical support.

All Types of Employment

The platform offers online remote jobs with different types of work:

  • Full-time – the ideal choice for those who value stability and predictability;
  • Part-time – perfect for those looking for a side job from home or seeking a balance between work and personal life;
  • Contract – suited for professionals who want to work on projects for a set period;
  • Temporary – short-term work that can be either full-time or part-time, often offered for seasonal or urgent tasks;
  • Internship – a form of on-the-job training that allows you to gain practical experience in your chosen field.

Whether you're looking for stable full-time employment, the flexibility of freelancing, or a part-time side gig, you'll find plenty of options on Remoote.app.

Remote Working Opportunities for All Expertise Levels

We feature offers for people with all levels of expertise:

  • for beginners – ideal positions for those just starting their journey working from home online;
  • for intermediate specialists – if you already have experience, you can explore positions requiring specific skills and knowledge in your field;
  • for experts – roles for highly skilled professionals ready to tackle complex tasks.

How to Start Your Online Job Search Through Our Platform?

To begin searching for home job opportunities, follow these three steps:

  1. Register and complete your profile. This process takes minimal time.
  2. Specify your skills, country of residence, and preferred position.
  3. Receive notifications about new vacancy openings and apply to suitable ones.

If you don't have a resume yet, use our online builder. It will help you create a professional document, highlighting your key skills and achievements. The AI will automatically optimize it to match job requirements, increasing your chances of a successful response. You can update your profile information at any time: modify your skills, add new preferences, or upload an updated resume.