Full-Stack Developer Jobs

Kafka
433 jobs found.

Set alerts to receive daily emails with new job openings that match your preferences.

Apply

📍 Europe

🧭 Full-Time

🔍 Software Development

🏢 Company: Kraken · 👥 1001-5000 · 💰 Secondary Market over 1 year ago · 🫂 Last layoff 7 months ago · Ethereum, Blockchain, Bitcoin, FinTech, Trading Platform

  • 3+ years of experience in software engineering
  • Proficiency in writing clean, scalable TypeScript/Node.js backend code
  • Demonstrated commitment to a security-first mindset when designing systems
  • Capability to autonomously debug issues across the stack, including OS, network, and application layers
  • Familiarity with distributed systems and technologies, including RPC protocols, Kafka, and event-driven systems (see the consumer sketch after this list)
  • Design and implement robust services and libraries for payments integration across our products at Kraken
  • Write reusable, testable, and highly efficient code
  • Collaborate with cross-functional teams, including Product, Design, and Frontend Engineering, to ensure seamless integration of new features and improvements in a large-scale distributed systems architecture
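
As an illustration of the event-driven pattern this role centers on, here is a minimal Kafka consumer sketch. The listing asks for TypeScript/Node.js; Python with the confluent-kafka client is used here only to show the consume-and-handle loop, and the broker address, group id, and topic name are hypothetical.

    # Minimal consume-and-handle loop; broker, group id, and topic are hypothetical.
    from confluent_kafka import Consumer

    consumer = Consumer({
        "bootstrap.servers": "localhost:9092",  # hypothetical broker
        "group.id": "payments-service",         # hypothetical consumer group
        "auto.offset.reset": "earliest",
    })
    consumer.subscribe(["payment-events"])      # hypothetical topic

    try:
        while True:
            msg = consumer.poll(1.0)            # wait up to 1s for a record
            if msg is None:
                continue
            if msg.error():
                print(f"Consumer error: {msg.error()}")  # a real service would alert here
                continue
            # Hand the payload to a domain handler (not shown).
            print(f"Received event: {msg.value().decode('utf-8')}")
    finally:
        consumer.close()                        # commit offsets and leave the group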

Backend Development, Node.js, Kafka, TypeScript, REST API, CI/CD, Software Engineering, Debugging

Posted about 10 hours ago
Apply

📍 Australia

🏢 Company: vernova_externalsite

  • Experience managing enterprise customers in Distributed Energy Resources Management Systems (DERMS).
  • Experience with Energy Utility Control Systems (EMS/DMS), Asset Management Systems, or Geospatial Information Systems (GIS) will be considered.
  • Experience in IT software support, including Kubernetes and Kafka, is preferred.
  • Provides account management for Premier Support and Escalated accounts.
  • Develops and drives action plans to accelerate issue resolution.
  • Maintains customer communication.
  • Advocates for the customer to ensure successful implementation and operation of GE Digital software solutions.
  • May be utilised directly on support cases where relevant.

AWS, Project Management, SQL, Data Analysis, Kafka, Kubernetes, RESTful APIs, Account Management, Digital Marketing, Technical Support, Customer Success

Posted about 10 hours ago
Apply

📍 Australia

🔍 Utilities

🏢 Company: vernova_externalsite

  • 10+ years in enterprise or solution architecture in the utilities, OT, or critical infrastructure domain.
  • Strong hands-on knowledge of: Kubernetes (RKE2/AKS), Istio, Helm
  • Experience working in regulated industries and aligning with compliance and cybersecurity standards (ISO27001, NIST, Australian Signals Directorate)
  • Architect scalable, secure, and cloud-native GridOS solutions across DERMS and ADMS platforms.
  • Define end-to-end technical architecture for control systems, telemetry pipelines, and DER integration using Kafka, ActiveMQ, Helm, Istio, MinIO, and PostgreSQL.
  • Lead technical solutioning in pre-sales (ITO) efforts; provide expert architectural input into Statements of Work (SOWs), estimates, and delivery assumptions.

Leadership, PostgreSQL, SQL, Cloud Computing, Cybersecurity, Git, Kafka, Kubernetes, LDAP, Microsoft Azure, Microsoft Power BI, Software Architecture, ActiveMQ, API Testing, Azure, REST API, Communication Skills, CI/CD, Problem Solving, Microsoft Office, Agile Methodologies, Mentoring, Linux, DevOps, Terraform, Compliance

Posted 1 day ago
Apply

📍 Poland

💸 22,900 - 29,900 PLN per month

🔍 Threat Intelligence

  • Bachelor’s degree in Computer Science, Engineering, or a related field.
  • 2+ years of experience as a Data Engineer, working with large-scale distributed systems.
  • Proven expertise in Lakehouse architecture and Apache Hudi in production environments.
  • Experience with Airflow, Kafka, or streaming data pipelines.
  • Strong programming skills in Python and PySpark.
  • Comfortable working in a cloud-based environment (preferably AWS).
  • Design, build, and manage scalable data pipelines using Python, SQL, and PySpark.
  • Develop and maintain lakehouse architectures, with hands-on use of Apache Hudi for data versioning, upserts, and compaction (see the upsert sketch after this list).
  • Implement efficient ETL/ELT processes for both batch and real-time data ingestion.
  • Optimize data storage and query performance across large datasets (partitioning, indexing, compaction).
  • Ensure data quality, governance, and lineage, integrating validation and monitoring into pipelines.
  • Work with cloud-native services (preferably AWS – S3, Athena, EMR) to support modern data workflows.
  • Collaborate closely with data scientists, analysts, and platform engineers to deliver reliable data infrastructure.
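
Because the role leans on Hudi's upsert semantics, here is a minimal PySpark sketch of a Hudi upsert, assuming Spark is launched with the matching Hudi Spark bundle on the classpath; the table name, key fields, and S3 path are hypothetical.

    # Minimal Hudi upsert; table name, key fields, and path are hypothetical.
    from pyspark.sql import SparkSession

    spark = (
        SparkSession.builder.appName("hudi-upsert-sketch")
        .config("spark.serializer", "org.apache.spark.serializer.KryoSerializer")
        .getOrCreate()
    )

    df = spark.createDataFrame(
        [("evt-1", "2024-01-01", "login"), ("evt-2", "2024-01-01", "logout")],
        ["event_id", "event_date", "event_type"],
    )

    hudi_options = {
        "hoodie.table.name": "events",                               # hypothetical table
        "hoodie.datasource.write.recordkey.field": "event_id",       # dedup key
        "hoodie.datasource.write.partitionpath.field": "event_date",
        "hoodie.datasource.write.precombine.field": "event_date",    # tie-breaker on key collisions
        "hoodie.datasource.write.operation": "upsert",               # update-or-insert
    }

    (
        df.write.format("hudi")
        .options(**hudi_options)
        .mode("append")                               # append mode still performs the upsert
        .save("s3://my-bucket/lakehouse/events")      # hypothetical path
    )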

AWS, Python, SQL, Cloud Computing, ETL, Kafka, Airflow, Data Engineering, CI/CD, Terraform

Posted 1 day ago
Apply

📍 United States

🧭 Full-Time

🔍 Information Security

  • 5+ years of experience in security engineering, with a primary focus on SIEM platforms.
  • Hands-on experience with at least two of the following SIEM platforms: Splunk, Microsoft Sentinel, Elastic, Google SecOps, CrowdStrike NG-SIEM, LogScale
  • 2+ years of experience with Cribl or similar observability pipeline tools (e.g., Logstash, Fluentd, Kafka).
  • Strong knowledge of log formats, data normalization, and event correlation (see the normalization sketch after this list).
  • Familiarity with detection engineering, threat modeling, and MITRE ATT&CK framework.
  • Proficiency with scripting (e.g., Python, PowerShell, Bash) and regular expressions.
  • Deep understanding of logging from cloud (AWS, Azure, GCP) and on-prem environments.
  • Architect, implement, and maintain SIEM solutions with a focus on modern platforms
  • Design and manage log ingestion pipelines using tools such as Cribl Stream, Edge, or Search (or similar).
  • Optimize data routing, enrichment, and filtering to improve SIEM efficiency and cost control.
  • Collaborate with cybersecurity, DevOps, and cloud infrastructure teams to integrate log sources and telemetry data.
  • Develop custom parsers, dashboards, correlation rules, and alerting logic for security analytics and threat detection.
  • Maintain and enhance system reliability, scalability, and performance of logging infrastructure.
  • Provide expertise and guidance on log normalization, storage strategy, and data retention policies.
  • Lead incident response investigations and assist with root cause analysis leveraging SIEM insights.
  • Mentor junior engineers and contribute to strategic security monitoring initiatives.
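
Since the role combines Python scripting, regular expressions, and log normalization, here is a minimal normalization sketch; the regex, field names, and severity mapping are illustrative only and follow no particular SIEM schema.

    # Parse an sshd auth line with a regex and emit a normalized JSON event.
    import json
    import re

    AUTH_RE = re.compile(
        r"(?P<ts>\w{3}\s+\d+\s[\d:]{8})\s(?P<host>\S+)\ssshd\[\d+\]:\s"
        r"(?P<outcome>Accepted|Failed)\s(?P<method>\S+)\sfor\s(?P<user>\S+)\s"
        r"from\s(?P<src_ip>\S+)"
    )

    def normalize(line: str) -> dict | None:
        """Map a raw syslog line onto a flat, normalized event dict."""
        m = AUTH_RE.search(line)
        if m is None:
            return None  # unparsed lines would go to a dead-letter route
        event = m.groupdict()
        event["event_type"] = "authentication"
        event["severity"] = "info" if event["outcome"] == "Accepted" else "warn"
        return event

    raw = "Oct  2 11:39:01 web-01 sshd[4120]: Failed password for root from 203.0.113.7 port 22 ssh2"
    print(json.dumps(normalize(raw), indent=2))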

AWS, Python, Bash, Cloud Computing, GCP, Kafka, Kubernetes, API Testing, Azure, Data Engineering, CI/CD, RESTful APIs, Linux, DevOps, JSON, Ansible, Scripting

Posted 1 day ago
Apply

📍 Argentina

🧭 Full-Time

🔍 Software Development

🏢 Company: Silver.dev

  • Resourceful individuals who thrive in a high agency environment.
  • Have been a founder, seed-stage engineering hire, or have launched your own project before.
  • Have strong product sense.
  • Have experience working with LLMs and agentic workflows.
  • Help with projects that root-cause KTLO (keep-the-lights-on) work and recommend solutions.
  • Develop a software catalog.
  • Help protect engineering focus time by systemically solving sources of distraction or mental load with AI.

AWS, Backend Development, PostgreSQL, Kafka, Kubernetes, TypeScript, Redis, Nest.js, React, Software Engineering

Posted 1 day ago
Apply

📍 United States, Canada

🧭 Full-Time

💸 155,000 - 255,000 USD per year

🔍 Cybersecurity

🏢 Company: crowdstrikecareers

  • 10+ years of experience in software development, with a focus on cloud-native architectures and distributed systems.
  • Expert-level proficiency in at least one modern programming language such as Go (preferred), Python, Java, or C#.
  • Demonstrated experience in designing and implementing large-scale, high-performance data processing systems.
  • Strong understanding of security concepts, threat detection methodologies, and UEBA principles.
  • Proven track record of leading complex technical projects and delivering results on schedule.
  • Experience with cloud platforms (preferably AWS) and containerization technologies like Docker and Kubernetes.
  • Excellent communication skills, with the ability to explain complex technical concepts to both technical and non-technical audiences.
  • A collaborative mindset and the ability to work effectively across teams and disciplines.
  • Lead the design and development of cloud-native microservices for our Next-Gen SIEM platform, focusing on detections and UEBA capabilities that can process and analyze trillions of events per day.
  • Take end-to-end ownership of complex, high-impact projects across multiple teams, driving technical decisions and providing architectural guidance using collaborative tools like Miro.
  • Partner with cross-functional teams to define, design, and implement solutions that enhance threat detection and analysis capabilities. Contribute to the medium-term strategic and technical direction by identifying areas of greatest need, and creating plans for improvement.
  • Utilize and integrate technologies such as Go, Kafka, Redis, OpenSearch, PostgreSQL, and more to build robust, scalable solutions.
  • Optimize and scale existing systems for improved stability, performance, and reliability across business-critical infrastructure, using monitoring tools like Grafana to track and analyze system metrics.
  • Mentor junior engineers through pair programming, code reviews, and knowledge sharing, fostering a culture of technical excellence. Additionally, participate in the interview process and coach/mentor new interviewers to maintain high hiring standards.
  • Champion software engineering best practices to ensure high-quality deliverables, including robust testing strategies, effective code reviews, comprehensive documentation, continuous integration/deployment, and adherence to architectural principles that promote scalability and maintainability.
  • Participate in and lead technical working groups that influence the broader Product team or industry.
  • Provide monitoring and operational support for production services, including participating in an on-call rotation for one week approximately every 10-12 weeks.
  • Be given the autonomy to own your work in a high trust environment, managing tasks and priorities effectively using Jira.

AWS, Backend Development, Docker, PostgreSQL, Software Development, Cloud Computing, Cybersecurity, Kafka, Kubernetes, Go, Redis, CI/CD, RESTful APIs, DevOps, Microservices

Posted 1 day ago
Apply

📍 United Kingdom, Ireland

🔍 Cybersecurity

🏢 Company: crowdstrikecareers

  • Degree in Computer Science (or commensurate experience in data structures/algorithms/distributed systems).
  • The ability to scale backend systems – sharding, partitioning, scaling horizontally are second nature to you.
  • Solid understanding of distributed systems and scalability challenges.
  • A thorough understanding of engineering best practices from appropriate testing paradigms to effective peer code reviews and resilient architecture.
  • The ability to thrive in a fast paced, test-driven, collaborative and iterative programming environment.
  • The skills to meet your commitments on time and produce high quality software that is unit tested, code reviewed, and checked in regularly for continuous integration.
  • Design, develop, document, test, deploy, maintain, and enhance large scale services.
  • Take ownership and be accountable for deliverables.
  • Triage system issues and debug by analyzing the sources of issues and the impact on service operations.
  • Mentor web and backend engineers on the use of our feature services.
  • Constantly re-evaluate our products to improve architecture, testing coverage, knowledge models, user experience, performance, observability and stability.
  • Partner with product teams in understanding their needs, work with PM to document the new requirements, and implement those new features within our feature services

AWS, Backend Development, Python, Software Development, Git, Kafka, Kubernetes, Algorithms, API Testing, Cassandra, Data Structures, Go, Postgres, Redis, CI/CD, RESTful APIs, Linux, DevOps, Microservices, Software Engineering

Posted 1 day ago
Apply

📍 LatAm

🧭 Contract

🏢 Company: Able · Rental, Property Management, Real Estate

  • 10+ years of data engineering experience with enterprise-scale systems
  • Expertise in Apache Spark and Delta Lake, including ACID transactions, time travel, Z-ordering, and compaction (see the merge sketch after this list)
  • Deep knowledge of Databricks (Jobs, Clusters, Workspaces, Delta Live Tables, Unity Catalog)
  • Experience building scalable ETL/ELT pipelines using tools like Airflow, Glue, Dataflow, or ADF
  • Advanced SQL for data modeling and transformation
  • Strong programming skills in Python (or Scala)
  • Hands-on experience with data formats such as Parquet, Avro, and JSON
  • Familiarity with schema evolution, versioning, and backfilling strategies
  • Working knowledge of at least one major cloud platform: AWS (S3, Athena, Redshift, Glue Catalog, Step Functions), GCP (BigQuery, Cloud Storage, Dataflow, Pub/Sub), or Azure (Synapse, Data Factory, Azure Databricks)
  • Experience designing data architectures with real-time or streaming data (Kafka, Kinesis)
  • Consulting or client-facing experience with strong communication and leadership skills
  • Experience with data mesh architectures and domain-driven data design
  • Knowledge of metadata management, data cataloging, and lineage tracking tools
  • Shape large-scale data architecture vision and roadmap across client engagements
  • Establish governance, security frameworks, and regulatory compliance standards
  • Lead strategy around platform selection, integration, and scaling
  • Guide organizations in adopting data lakehouse and federated data models
  • Lead technical discovery sessions to understand client needs
  • Translate complex architectures into clear, actionable value for stakeholders
  • Build trusted advisor relationships and guide strategic decisions
  • Align architecture recommendations with business growth and goals
  • Design and implement modern data lakehouse architectures with Delta Lake and Databricks
  • Build and manage ETL/ELT pipelines at scale using Spark (PySpark preferred)
  • Leverage Delta Live Tables, Unity Catalog, and schema evolution features
  • Optimize storage and queries on cloud object storage (e.g., AWS S3, Azure Data Lake)
  • Integrate with cloud-native services like AWS Glue, GCP Dataflow, and Azure Synapse Analytics
  • Implement data quality monitoring, lineage tracking, and schema versioning
  • Build scalable pipelines with tools like Apache Airflow, Step Functions, and Cloud Composer
  • Develop cost-optimized, scalable, and compliant data solutions
  • Design POCs and pilots to validate technical approaches
  • Translate business requirements into production-ready data systems
  • Define and track success metrics for platform and pipeline initiatives
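
As a small illustration of the Delta Lake upsert and time-travel features the listing names, here is a PySpark sketch assuming the delta-spark package is on the classpath; the table path and column names are hypothetical.

    # Minimal Delta Lake MERGE (ACID upsert) plus a time-travel read.
    from delta.tables import DeltaTable
    from pyspark.sql import SparkSession

    spark = (
        SparkSession.builder.appName("delta-merge-sketch")
        .config("spark.sql.extensions", "io.delta.sql.DeltaSparkSessionExtension")
        .config("spark.sql.catalog.spark_catalog",
                "org.apache.spark.sql.delta.catalog.DeltaCatalog")
        .getOrCreate()
    )

    updates = spark.createDataFrame(
        [(1, "active"), (2, "churned")], ["customer_id", "status"]
    )

    # Hypothetical existing Delta table on object storage.
    target = DeltaTable.forPath(spark, "s3://my-bucket/lakehouse/customers")

    (
        target.alias("t")
        .merge(updates.alias("u"), "t.customer_id = u.customer_id")
        .whenMatchedUpdateAll()      # update rows that already exist
        .whenNotMatchedInsertAll()   # insert brand-new customer ids
        .execute()
    )

    # Time travel: read the table as it was at an earlier version.
    snapshot = (
        spark.read.format("delta")
        .option("versionAsOf", 0)
        .load("s3://my-bucket/lakehouse/customers")
    )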

AWS, Python, SQL, Cloud Computing, ETL, GCP, Kafka, Airflow, Azure, Data Engineering, Scala, Data Modeling

Posted 1 day ago
Apply

📍 Mexico

🧭 Full-Time

🔍 Software Development

🏢 Company: Varicent

  • 5+ years of experience in modern web development with React, Node.js, TypeScript, and JavaScript.
  • Strong background with microservices, cloud architecture, and serverless development (preferably AWS).
  • Solid knowledge of SQL, NoSQL, and API design (REST/GraphQL).
  • Experience in automated testing, CI/CD, and agile delivery environments.
  • Lead the design and development of new features using the MERN stack.
  • Build cloud-native apps with AWS Lambda, Aurora, DynamoDB, ECS, and GraphQL.
  • Improve system scalability, performance, and architecture.
  • Write clean, efficient, and maintainable code with CI/CD and test automation.
  • Collaborate closely with product and design to deliver intuitive user experiences.
  • Guide junior developers through code reviews and technical mentoring.

AWS, Backend Development, Docker, GraphQL, Node.js, Software Development, SQL, Agile, DynamoDB, Express.js, Frontend Development, JavaScript, Kafka, React.js, TypeScript, API Testing, REST API, Redux, Serverless, TestRail, React, Communication Skills, CI/CD, Problem Solving, Mentoring, Microservices, Technical Support, Software Engineering, English Communication

Posted 1 day ago
Apply
Showing 10 of 433

Ready to Start Your Remote Journey?

Apply to 5 jobs per day for free, or get unlimited applications with a subscription starting at €5/week.

Why Full-Stack Developer Jobs Are Becoming More Popular

Remote work from home is increasingly in demand among computer and IT professionals for several reasons:

  • Flexibility in time and location.
  • Collaboration with international companies.
  • Higher salary levels.
  • No ties to a physical office.

Remote work opens up new opportunities for specialists, allowing them to go beyond geographical limits and build a successful remote IT career. This employment model is transforming traditional approaches to work, making it more convenient, efficient, and accessible for professionals worldwide.

Why Do Job Seekers Choose Remoote.app?

Our platform makes it easy to find remote IT jobs you can do from home:

  • localized search — filter job listings based on your country of residence;
  • AI-powered job processing — artificial intelligence analyzes thousands of listings, highlighting key details so you don’t have to read long descriptions;
  • advanced filters — sort vacancies by skills, experience, qualification level, and work model;
  • regular database updates — we monitor job relevance and remove outdated listings;
  • personalized notifications — get tailored job offers directly via email or Telegram;
  • resume builder — create a professional CV with ease using our customizable templates and AI-powered suggestions;
  • data security — modern encryption technologies ensure the protection of your personal information.

Join our platform and find your dream job today! We offer flexible pricing — up to 5 applications per day for free, with weekly, monthly, and yearly subscription plans for extended access.