Apache Kafka Jobs

Find remote positions requiring Apache Kafka skills. Browse through opportunities where you can utilize your expertise and grow your career.

Apache Kafka

104 jobs found.

Set alerts to receive daily emails with new job openings that match your preferences.

🔥 Senior Business Analyst

πŸ“ Bratislava, Kyiv

πŸ” FinTech/Banking or Energy

  • 3+ years of Business Analysis experience in either FinTech or Energy domain.
  • Strong understanding of financial systems, digital banking, payments, and regulatory frameworks or energy market analytics, renewable energy trends, and asset management.
  • Experience with big data tools and ETL platforms (Airflow, Kafka, Postgres, MongoDB); a minimal orchestration sketch follows this list.
  • Proficiency in data modeling, BPMN, UML, ERD diagrams, and business process mapping.
  • Deep understanding of Agile/Scrum methodologies, backlog management, and requirement elicitation.
  • Familiarity with Jira, Confluence, and wireframing/mockup tools.
  • Excellent problem-solving and communication skills.
  • Fluent in English (C1 or higher).
  • Work closely with stakeholders to gather, document, and analyze business requirements, ensuring alignment with company objectives.
  • Evaluate and optimize business processes through data analysis and stakeholder feedback.
  • Develop comprehensive Business Requirement Documents (BRD), Functional Specification Documents (FSD), and User Stories.
  • Act as a bridge between business and technical teams, ensuring smooth collaboration and understanding across departments.
  • Utilize data-driven insights to support decision-making and provide business intelligence reports.
  • Work with development teams to ensure that proposed solutions meet business needs and contribute to strategic goals.
  • Identify potential risks associated with business processes and propose mitigation strategies.
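
As a hedged illustration of the ETL stack named above (Airflow, Kafka, Postgres), here is a minimal Airflow DAG sketch. The DAG id, schedule, and task body are illustrative assumptions, not details from the posting.

```python
# Minimal sketch, assuming Airflow 2.x: one daily task that would extract
# rows from Postgres and publish them to Kafka. All names are placeholders.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract_and_publish():
    # Placeholder: a real task might query Postgres with psycopg2 and
    # publish each row to a Kafka topic with a producer client.
    ...


with DAG(
    dag_id="orders_postgres_to_kafka",  # hypothetical pipeline name
    start_date=datetime(2024, 1, 1),
    schedule="@daily",                  # Airflow 2.4+ argument name
    catchup=False,
):
    PythonOperator(
        task_id="extract_and_publish",
        python_callable=extract_and_publish,
    )
```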

PostgreSQL, SQL, Agile, Apache Airflow, Business Analysis, Data Analysis, ETL, MongoDB, SCRUM, Jira, Apache Kafka, RDBMS, Risk Management, Data visualization, Stakeholder management, Data modeling, Confluence, English communication

Posted about 15 hours ago
Apply

πŸ“ China

🏢 Company: Bjak · 👥 101-250 · Price Comparison · InsurTech · Information Technology

  • Bachelor's, Master's, or Ph.D. in Computer Science, Artificial Intelligence, or a related field.
  • Proven experience as an AI engineer or data scientist, with a track record of leading successful AI projects.
  • Proficiency in AI and machine learning frameworks and programming languages (e.g., Python).
  • Strong expertise in data preprocessing, feature engineering, and model evaluation.
  • Excellent problem-solving and critical-thinking skills.
  • Effective leadership, communication, and team management abilities.
  • A passion for staying at the forefront of AI and machine learning advancements.
  • Lead and mentor a team of AI engineers, providing technical guidance, coaching, and fostering their growth.
  • Collaborate with product managers and stakeholders to define AI project objectives, requirements, and timelines.
  • Design, develop, and implement AI models, algorithms, and applications to solve complex business challenges.
  • Oversee the end-to-end AI model lifecycle, including data collection, preprocessing, model training, evaluation, and deployment; a minimal training/evaluation sketch follows this list.
  • Stay updated with the latest advancements in AI and machine learning, incorporating best practices into projects.
  • Drive data-driven decision-making through advanced analytics and visualization techniques.
  • Ensure the security, scalability, and efficiency of AI solutions.
  • Lead research efforts to explore and integrate cutting-edge AI techniques.
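
A minimal, hedged sketch of the train/evaluate slice of the model lifecycle described above, using scikit-learn for brevity (the posting's tags lean toward Keras/TensorFlow); the dataset and model choice are purely illustrative.

```python
# Minimal sketch: split data, fit a baseline model, report a test metric.
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)  # stand-in dataset
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42
)

model = LogisticRegression(max_iter=5000)  # feature engineering omitted
model.fit(X_train, y_train)

print("test accuracy:", accuracy_score(y_test, model.predict(X_test)))
```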

Docker, Python, Artificial Intelligence, Data Analysis, Keras, Machine Learning, MLFlow, Numpy, Algorithms, Apache Kafka, API testing, Data science, REST API, Pandas, Spark, Tensorflow, CI/CD, Microservices, Data visualization, Data modeling

Posted 1 day ago
Apply

πŸ“ United States

🧭 Full-Time

πŸ” Software Development

  • Experience with Infrastructure as Code tools such as Terraform or CloudFormation. Ability to automate the deployment and management of data infrastructure.
  • Familiarity with Continuous Integration and Continuous Deployment (CI/CD) processes. Experience setting up and maintaining CI/CD pipelines for data applications.
  • Proficiency in the software development lifecycle: release fast and improve incrementally.
  • Experience with tools and frameworks for ensuring data quality, such as data validation, anomaly detection, and monitoring. Ability to design systems to track and enforce data quality standards.
  • Proven experience in designing, building, and maintaining scalable data pipelines capable of processing terabytes of data daily using modern data processing frameworks (e.g., Apache Spark, Apache Kafka, Flink, Open Table Formats, modern OLAP databases).
  • Strong foundation in data architecture principles and the ability to evaluate emerging technologies.
  • Proficient in at least one modern programming language (Go, Python, Java, Rust) and SQL.
  • Design and implement both real-time and batch data processing pipelines, leveraging technologies like Apache Kafka, Apache Flink, or managed cloud streaming services to ensure scalability and resilience; a minimal consumer sketch follows this list.
  • Create data pipelines that efficiently process terabytes of data daily, leveraging data lakes and data warehouses within the AWS cloud. Must be proficient with technologies like Apache Spark to handle large-scale data processing.
  • Implement robust schema management practices and lay the groundwork for future data contracts. Ensure pipeline integrity by establishing and enforcing data quality checks, improving overall data reliability and consistency.
  • Develop tools to support rapid development of data products. Provide recommended patterns to support data pipeline deployments.
  • Designing, implementing, and maintaining data governance frameworks and best practices to ensure data quality, security, compliance, and accessibility across the organization.
  • Develop tools to support the rapid development of data products and establish recommended patterns for data pipeline deployments. Mentor and guide junior engineers, fostering their growth in best practices and efficient development processes.
  • Collaborate with the DevOps team to integrate data needs into DevOps tooling.
  • Champion DataOps practices within the organization, promoting a culture of collaboration, automation, and continuous improvement in data engineering processes.
  • Stay abreast of emerging technologies, tools, and trends in data processing and analytics, and evaluate their potential impact and relevance to Fetch's strategy.
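
As a hedged sketch of the real-time side of these pipelines, here is a tiny Kafka consumer using the kafka-python client; the broker address, topic, and consumer group are placeholder assumptions.

```python
# Minimal sketch: consume JSON events from a Kafka topic and print them.
import json

from kafka import KafkaConsumer

consumer = KafkaConsumer(
    "events",                            # hypothetical topic
    bootstrap_servers="localhost:9092",  # placeholder broker
    group_id="example-pipeline",         # placeholder consumer group
    auto_offset_reset="earliest",
    value_deserializer=lambda v: json.loads(v.decode("utf-8")),
)

for message in consumer:
    # A real pipeline would validate against a schema and route the record
    # onward (e.g. to a lake or warehouse); printing stands in for that.
    print(message.topic, message.offset, message.value)
```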

AWS, Python, SQL, ETL, Java, Apache Kafka, Data engineering, Go, Rust, CI/CD, DevOps, Terraform, Data visualization, Data modeling, Data analytics, Data management

Posted 1 day ago
Apply

πŸ“ United States

🧭 Full-Time

πŸ’Έ 117800.0 - 214300.0 USD per year

πŸ” Software Development

🏒 Company: careers_gm

  • 7+ years of hands-on experience.
  • Bachelor's degree (or equivalent work experience) in Computer Science, Data Science, Software Engineering, or a related field.
  • Strong understanding of data ETL processes and tools for designing and managing data pipelines, with the ability to mentor others in these areas.
  • Proficient with big data frameworks and tools like Apache Hadoop, Apache Spark, or Apache Kafka for processing and analyzing large datasets.
  • Hands-on experience with data serialization formats like JSON, Parquet, and XML; a minimal read/write sketch follows this list.
  • Models and leads best practices and optimization in scripting languages like Python, Java, and Scala for automation and data processing.
  • Proficient with database administration and performance tuning for databases like MySQL, PostgreSQL, or NoSQL databases.
  • Proficient with containerization (e.g., Docker) and orchestration platforms (e.g., Kubernetes) for managing data applications.
  • Experience with cloud platforms and data services for data storage and processing
  • Consistently designs and builds data solutions that are highly automated and performant, with quality checks that ensure data consistency and accuracy.
  • Experienced at actively managing large-scale data engineering projects, including planning, resource allocation, risk management, and ensuring successful delivery, adjusting style across delivery methods (e.g., Waterfall, Agile, POD).
  • Understands data governance principles, data privacy regulations, and experience implementing security measures to protect data
  • Able to integrate data engineering pipelines with machine learning models and platforms
  • Strong problem-solving skills to identify and resolve complex data engineering issues efficiently.
  • Ability to work effectively in cross-functional teams, collaborate with data scientists, analysts, and stakeholders to deliver data solutions.
  • Ability to lead and mentor junior data engineers, providing guidance and support in complex data engineering projects.
  • Influential communication skills to effectively convey technical concepts to non-technical stakeholders and document data engineering processes.
  • Models a mindset of continuous learning, staying updated with the latest advancements in data engineering technologies, and a drive for innovation.
  • Design, construct, install and maintain data architectures, including database and large-scale processing systems.
  • Develop and maintain ETL (Extract, Transform, Load) processes to collect, cleanse and transform data from various sources inclusive of cloud.
  • Design and implement data pipelines to collect, process and transfer data from various sources to storage systems (data warehouses, data lakes, etc)
  • Implement security measures to protect sensitive data and ensure compliance with data privacy regulations.
  • Build data solutions that ensure data quality, integrity and security through data validation, monitoring, and compliance with data governance policies
  • Administer and optimize databases for performance and scalability
  • Maintain Master Data, Metadata, Data Management Repositories, Logical Data Models, and Data Standards
  • Troubleshoot and resolve data-related issues affecting data quality and fidelity
  • Document data architectures, processes and best practices for knowledge sharing across the GM data engineering community
  • Participate in the evaluation and selection of data related tools and technologies
  • Collaborate across other engineering functions within EDAI, Marketing Technology, and Software & Services
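
A minimal sketch of the serialization formats mentioned above (JSON in, Parquet out) using PySpark; the bucket paths and column name are invented for illustration.

```python
# Minimal sketch: read JSON events, drop malformed rows, write Parquet.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("json-to-parquet").getOrCreate()

df = spark.read.json("s3://example-bucket/raw/events/")  # placeholder path

(df.filter(df["event_type"].isNotNull())                 # hypothetical column
   .write.mode("overwrite")
   .parquet("s3://example-bucket/curated/events/"))      # placeholder path

spark.stop()
```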

AWS, Docker, PostgreSQL, Python, SQL, Apache Hadoop, Cloud Computing, Data Analysis, ETL, Java, Kubernetes, MySQL, Algorithms, Apache Kafka, Data engineering, Data science, Data Structures, REST API, NoSQL, CI/CD, Problem Solving, JSON, Scala, Data visualization, Data modeling, Scripting, Data analytics, Data management

Posted 4 days ago
Apply

πŸ“ United States

🧭 Full-Time

πŸ’Έ 160000.0 - 220000.0 USDC per year

πŸ” Software Development

🏒 Company: OrcaπŸ‘₯ 11-50πŸ’° $18,000,000 Series A over 3 years agoCryptocurrencyBlockchainOnline PortalsInformation Technology

  • Experience with Rust is a plus but not required
  • Strong interest and proficiency in building high performance backend systems
  • Understanding of DeFi
  • Design and Implement Scalable Servers in Rust
  • Develop Data Processing Pipelines
  • Advance Application Features
  • Manage Database Interactions

AWS, Backend Development, PostgreSQL, SQL, Blockchain, TypeScript, Algorithms, Apache Kafka, API testing, Data Structures, gRPC, Postgres, REST API, Serverless, Next.js, Rust, Web3.js, CI/CD, Terraform, Microservices, Data modeling, NodeJS, Software Engineering

Posted 5 days ago
Apply

πŸ“ Ireland

πŸ” Ad tech

🏒 Company: eyeoπŸ‘₯ 51-100InternetOpen SourcePrivacySoftwareBrowser Extensions

  • Experience translating data strategy into scalable, fault-tolerant architectures
  • Familiarity with different approaches to data architecture: warehouse, lake, mesh, batch vs streaming, ETL vs ELT, etc., and how they can be leveraged in different use cases
  • Experience in Python and common data libraries and platforms such as Airflow, Pandas, PySpark, etc
  • Experience with cloud services (ideally Google Cloud), including managing infrastructure with Terraform
  • Expertise with advanced SQL queries and query optimization, ideally in BigQuery
  • Passion for introducing engineering best practices, for instance to ensure testability, data quality and completeness, etc
  • Design and build data platforms, endpoints and pipelines that meet business requirements and enable stakeholders to make more data-driven decisions, without losing track of technical quality and maintainability
  • Actively collaborate with teams across all of eyeo (like browser extension developers, data analysts and legal counsels) to design data collection systems that are compliant with regulations and respect user privacy
  • Manage software from proof-of-concept to deployment, to operation, and finally deprecation; and manage data through ingestion, access, schema changes and deletion
  • Improve both our software and data lifecycle processes
  • Implement strategies to ensure that data is accurate, complete, timely and consistent; a minimal validation sketch follows this list
  • Identify, design and implement process improvements: automating manual processes, simplifying collaboration with data analysts, etc
  • Contribute to ongoing management of our platforms, including performance monitoring, troubleshooting and resolution of technical issues
  • Be a multiplier in your team, encouraging debate, and helping to create an environment that fosters learning and growth
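
A hedged sketch of the simple data-quality assertions the responsibilities above describe, using Pandas; the frame and its column names are invented for illustration.

```python
# Minimal sketch: run a few quality checks and report what failed.
import pandas as pd

df = pd.DataFrame(
    {"user_id": [1, 2, 2, None], "amount": [10.0, -5.0, 7.5, 3.0]}
)

issues = []
if df["user_id"].isna().any():
    issues.append("user_id contains nulls")
if df["user_id"].duplicated().any():
    issues.append("user_id contains duplicates")
if (df["amount"] < 0).any():
    issues.append("amount contains negative values")

# In a real pipeline these checks would fail the run or raise an alert;
# printing stands in for that here.
print(issues or "all checks passed")
```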

Docker, Python, SQL, Cloud Computing, ETL, Airflow, Apache Kafka, Data engineering, Pandas, Communication Skills, Analytical Skills, Collaboration, Problem Solving, RESTful APIs, Terraform, Microservices, Data visualization, Data modeling, Software Engineering, Data management

Posted 5 days ago
Apply

πŸ“ United States

🧭 Full-Time

πŸ” Software Development

🏒 Company: American College of EducationπŸ‘₯ 100-500Education

  • Minimum of 4 years of experience in cloud engineering or a related field.
  • Strong knowledge of cloud platforms (e.g., AWS, Azure, Google Cloud) and services.
  • Strong knowledge of Microsoft Windows servers and services.
  • Proficiency in scripting and automation tools (e.g., Python, PowerShell, Terraform).
  • Excellent problem-solving and analytical skills.
  • Strong communication and interpersonal skills.
  • Relevant certifications (e.g., AWS Certified Solutions Architect, Microsoft Certified: Azure Solutions Architect) are highly desirable.
  • Designs, deploys, and manages public cloud infrastructure and services (e.g., AWS, Azure, Google Cloud) and private cloud infrastructure and services.
  • Monitors and optimizes cloud performance, scalability, and cost-efficiency.
  • Implements and manages security measures to protect cloud environments.
  • Automates cloud operations and workflows using scripting and infrastructure-as-code (IaC) tools; a minimal automation sketch follows this list.
  • Collaborates with development and operations teams to ensure seamless integration of cloud services.
  • Troubleshoots and resolves cloud-related issues and incidents.
  • Stays updated with the latest cloud technologies and best practices.
  • Documents cloud and system configurations, procedures, and changes.
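
As a hedged sketch of the scripted cloud automation this posting describes, here is a small boto3 snippet that reports EC2 instances missing an Owner tag; the region and the tagging policy are illustrative assumptions.

```python
# Minimal sketch: page through EC2 instances and flag untagged ones.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")  # placeholder region

paginator = ec2.get_paginator("describe_instances")
for page in paginator.paginate():
    for reservation in page["Reservations"]:
        for instance in reservation["Instances"]:
            tags = {t["Key"] for t in instance.get("Tags", [])}
            if "Owner" not in tags:  # hypothetical tagging policy
                print("untagged instance:", instance["InstanceId"])
```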

AWS, Python, SQL, Cloud Computing, GCP, Kubernetes, Microsoft Active Directory, Jira, ActiveMQ, Apache Kafka, Azure, REST API, Communication Skills, Analytical Skills, CI/CD, Problem Solving, Linux, DevOps, Terraform, Organizational skills, Time Management, Troubleshooting, JSON, Scripting, Confluence

Posted 5 days ago
Apply

πŸ“ Canada

🧭 Full-Time

πŸ” Software Development

🏒 Company: ProcurifyπŸ‘₯ 101-250πŸ’° $20,000,000 5 months agoCloud ComputingSaaSSupply Chain ManagementEnterprise SoftwareFinTechSoftwareProcurement

  • 6-7+ years in a Machine Learning or Data Scientist role, including 2+ years of experience with LLMs.
  • Proven experience as the first ML engineer or a similar role, demonstrating a strong ability to build ML systems from the ground up.
  • Demonstrated experience building AI apps in production.
  • Proficiency in machine learning frameworks and libraries (e.g. Tensorflow, PyTorch, scikit-learn, Pandas).
  • Experience building with LLMs such as GPT, Claude, and Llama, and a strong understanding of LLM architectures and tools (LlamaIndex, vector databases, Transformers, LangChain, etc.).
  • Experience with ETL/ELT tools, Data Lakehouse tech (Databricks, Python, Apache Spark, Hive, Parquet) and advanced SQL knowledge.
  • Strong programming skills in Python and familiarity with additional languages and tools commonly used in ML engineering.
  • Comfortable leading by example and using influence to drive collaboration, documentation, and knowledge sharing across teams and with a broad range of stakeholders.
  • Able to demonstrate initiative, work independently, and thrive with autonomy while collaborating across teams in a culture of priority setting and moving forward with urgency in alignment with our organizational strategy.
  • Adept at focusing on multiple competing priorities, solving unique and complex technical problems, and persistently resolving blockers to progress.
  • Familiar with DevOps and MLOps principles such as design for manageability and root cause analysis.
  • Familiar working within leading software development best practices such as scrum/kanban, CI/CD, and test automation
  • A strong driver to stay ahead of the curve with GenAI research and apply those insights to build real-world applications.
  • Develop and refine autonomous agents leveraging generative AI to automate and streamline user workflows, enhancing operational efficiency and user experience.
  • Design, create, evolve, and maintain scalable and efficient machine learning systems including, data pipelines, model training, deployment, and monitoring frameworks.
  • Integrate and leverage Large Language Models (LLMs) to develop advanced NLP features, including but not limited to chatbots, workflow automation agents, and data analysis tools using state-of-the-art models (e.g. OpenAI, Anthropic, open-source models); a minimal LLM-call sketch follows this list.
  • Build complex, reusable architectures for services and systems using well-accepted design patterns to support iterative development and future scaling.
  • Develop and enhance systems to deliver personalized experiences to our users, utilizing advanced machine learning and AI technologies to drive engagement and satisfaction.
  • Partner across Product and Engineering teams on requirements to create product capabilities that fundamentally rely on AI and Machine Learning.
  • Collaborate with leadership to shape the vision for machine learning and AI at Procurify, providing valuable insights and guidance on technological strategies and opportunities.
  • Drive conversations within Engineering to improve and optimize source data models and the integration of ML capabilities, including those in our product platform.
  • Identify, design, and implement internal process improvements, including automation for data quality control and data validation, improved data delivery, and scalability.
  • Mentor other engineers, imparting best practices and institutionalizing efficient processes to foster growth and innovation within the team.
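
A minimal, hedged sketch of calling a hosted LLM of the kind listed above, using the OpenAI Python client; the model name, prompt, and classification task are illustrative, and retries/error handling are omitted.

```python
# Minimal sketch: one chat-completion call; assumes OPENAI_API_KEY is set.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model choice
    messages=[
        {"role": "system", "content": "You classify procurement requests."},
        {"role": "user", "content": "New laptop for a designer, $2,400."},
    ],
)
print(response.choices[0].message.content)
```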

Python, SQL, Artificial Intelligence, Data Analysis, ETL, Machine Learning, Numpy, OpenCV, PyTorch, Algorithms, Apache Kafka, Data engineering, Data Structures, REST API, Pandas, Tensorflow, CI/CD, DevOps, Data modeling

Posted 5 days ago
Apply

πŸ“ Europe

🧭 Fulltime

πŸ” Fintech

🏒 Company: UpvestπŸ‘₯ 1-10πŸ’° $667,439 almost 4 years agoReal Estate InvestmentFinancial ServicesOnline PortalsReal Estate

  • 5+ years in a product management role, preferably in technical or platform-oriented products within a data-intensive environment.
  • Strong understanding of data engineering concepts such as data pipelines, data warehousing, (reverse) ETL/ELT, data modelling, distributed event logs (Kafka), data processing libraries (Flink, Kafka Streams), and developer experience in all of these topics.
  • Proven ability to define a vision and strategy, break it into actionable steps, and deliver measurable outcomes.
  • Define and own the vision, roadmap, and strategy for the data platform, ensuring alignment with broader company goals and engineering objectives.
  • Collaborate with engineering teams to build foundational data infrastructure and tools that enable efficient development and deployment of data products.
  • Work closely with infrastructure, data engineering, analytics, and business teams to understand their needs and translate them into platform requirements.
  • Drive the development of platform capabilities, such as data ingestion and processing frameworks, APIs, self-service pipelines, data governance frameworks, and self-service tools.
  • Lead initiatives to make high-quality, discoverable, and secure data available across teams, promoting a culture of data ownership and autonomy.
  • Collaborate with engineering teams to ensure on-time, high-quality delivery of platform features, ensuring they meet technical and business requirements.
  • Advocate for the adoption and use of the data platform tools and services, ensuring alignment across the organization and addressing resistance points where necessary.

AWS, SQL, Cloud Computing, Data Analysis, ETL, GCP, Product Management, Snowflake, Product Development, Apache Kafka, Azure, Data engineering, REST API, Data visualization, Strategic thinking, Data modeling

Posted 5 days ago
Apply

πŸ“ United States

πŸ’Έ 180000.0 - 222000.0 USD per year

πŸ” B2B enterprise software platforms, data infrastructure, data integration, distributed systems

🏒 Company: Redpanda DataπŸ‘₯ 101-250πŸ’° $100,000,000 Series C over 1 year agoDeveloper ToolsConsultingBig DataHardwareAnalyticsInformation TechnologySoftware

  • 2+ years of experience in direct role-related product management for B2B enterprise software platforms, particularly data infrastructure, data integration or other distributed systems.
  • 5+ years of overall experience in enterprise software companies
  • Experience working cross-functionally with technical and non-technical teams
  • Work with a 100% distributed / remote team
  • Comfortable switching context frequently
  • Experience working closely with highly technical products and customers
  • Excellent written and verbal communication skills, with the ability to concisely explain technical concepts to non-technical audiences (and vice-versa)
  • Meet with customers, analyze the competitive landscape, and identify market opportunities to shape the company's product strategy in your areas of responsibility
  • Define requirements for new features through PRDs and prioritize them appropriately in the product roadmap.
  • Communicate new features internally and externally and help articulate the value proposition, messaging, and pricing.
  • Collaborate with multiple internal teams (including Engineering, Marketing, Sales, and Customer Success) to ensure we ship the right features, at the right time, with the right user experience to match the expectations of our ICP and various user personas
  • Own delivery of features end-to-end, picking up tasks wherever needed, reviewing documentation, and creating the telemetry required to measure the success of feature launches after rollout

SQLCloud ComputingData AnalysisProduct ManagementUser Experience DesignCross-functional Team LeadershipProduct DevelopmentProduct AnalyticsApache KafkaAPI testingREST APICommunication SkillsAnalytical SkillsCollaborationCI/CDProblem SolvingMarket ResearchStrategic thinkingTechnical supportData modelingCustomer SuccessSaaS

Posted 5 days ago
Apply
Showing 10 of 104 jobs.