Cassandra Jobs

Find remote positions requiring Cassandra skills. Browse opportunities where you can apply your expertise and grow your career.

Cassandra
55 jobs found.

Set alerts to receive daily emails with new job openings that match your preferences.

Apply

πŸ“ Canada

🧭 Full-Time

πŸ” Cybersecurity

🏒 Company: JobgetherπŸ‘₯ 11-50πŸ’° $1,493,585 Seed about 2 years agoInternet

  • 3+ years of back-end development experience, with expertise in Node.js and backend frameworks like Nest.js or Express.js.
  • Experience in designing and maintaining microservices architectures and contributing to full-stack development.
  • Proficiency in database management, schema design, performance tuning, and indexing for large-scale distributed databases.
  • Experience with message-driven architectures, using tools like Kafka or RabbitMQ.
  • Familiarity with CI/CD pipelines (Jenkins, GitLab CI, CircleCI) and automation of deployment and scaling.
  • Proven experience in leading and mentoring engineering teams.
  • Expertise in cloud-native technologies (e.g., AWS Lambda) and monitoring tools (e.g., Prometheus, Grafana).
  • Familiarity with containerized microservices using Kubernetes.
  • Strong problem-solving and communication skills, with a passion for continuous learning.
  • B.S. degree in Computer Science or a related field, or equivalent work experience.
  • Design, develop, and maintain backend systems and microservices using Node.js, Kubernetes, and related technologies.
  • Lead projects across the stack, focusing on backend components and collaborating with front-end developers for full-stack solutions.
  • Manage and optimize distributed databases like PostgreSQL, MongoDB, or Cassandra, ensuring scalability and performance.
  • Build and maintain APIs (RESTful, gRPC, or GraphQL) and integrate third-party services, ensuring security, performance, and scalability.
  • Mentor and guide junior engineers, leading complex, multi-person projects to successful completion.
  • Collaborate effectively with cross-functional teams and leadership to align technical solutions with business goals.

AWS · Backend Development · Docker · GraphQL · Leadership · Node.js · PostgreSQL · Express.js · Full Stack Development · Kafka · Kubernetes · MongoDB · RabbitMQ · API testing · Cassandra · Grafana · gRPC · Prometheus · Nest.js · CI/CD · RESTful APIs · Mentoring · Microservices

Posted 2 days ago
Apply

πŸ“ United States

πŸ” Software Development

🏒 Company: ge_externalsite

  • Exposure to industry-standard data modeling tools (e.g., ERWin, ER Studio).
  • Exposure to Extract, Transform & Load (ETL) tools like Informatica or Talend.
  • Exposure to industry-standard data catalog, automated data discovery, and data lineage tools (e.g., Alation, Collibra, TAMR).
  • Hands-on experience in programming languages like Java, Python or Scala
  • Hands-on experience in writing SQL scripts for Oracle, MySQL, PostgreSQL or HiveQL
  • Experience with Big Data / Hadoop / Spark / Hive / NoSQL database engines (e.g., Cassandra or HBase)
  • Exposure to unstructured datasets and ability to handle XML, JSON file formats
  • Work independently as well as with a team to develop and support Ingestion jobs
  • Evaluate and understand various data sources (databases, APIs, flat files, etc.) to determine optimal ingestion strategies
  • Develop a comprehensive data ingestion architecture, including data pipelines, data transformation logic, and data quality checks, considering scalability and performance requirements.
  • Choose appropriate data ingestion tools and frameworks based on data volume, velocity, and complexity
  • Design and build data pipelines to extract, transform, and load data from source systems to target destinations, ensuring data integrity and consistency
  • Implement data quality checks and validation mechanisms throughout the ingestion process to identify and address data issues
  • Monitor and optimize data ingestion pipelines to ensure efficient data processing and timely delivery
  • Set up monitoring systems to track data ingestion performance, identify potential bottlenecks, and trigger alerts for issues
  • Work closely with data engineers, data analysts, and business stakeholders to understand data requirements and align ingestion strategies with business objectives.
  • Build technical data dictionaries and support business glossaries to analyze the datasets
  • Perform data profiling and data analysis for source systems, manually maintained data, machine generated data and target data repositories
  • Build both logical and physical data models for both Online Transaction Processing (OLTP) and Online Analytical Processing (OLAP) solutions
  • Develop and maintain data mapping specifications based on the results of data analysis and functional requirements
  • Perform a variety of data loads & data transformations using multiple tools and technologies.
  • Build automated Extract, Transform & Load (ETL) jobs based on data mapping specifications
  • Maintain metadata structures needed for building reusable Extract, Transform & Load (ETL) components.
  • Analyze reference datasets and familiarize with Master Data Management (MDM) tools.
  • Analyze the impact of downstream systems and products
  • Derive solutions and make recommendations from deep dive data analysis.
  • Design and build Data Quality (DQ) rules as needed
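As a hedged illustration of the "data quality checks and validation mechanisms" bullet above, here is a minimal Python sketch of row-level validation in an ingestion step; all names (`check_row`, `ingest`, `REQUIRED_FIELDS`) are hypothetical and not taken from the posting:

```python
# Illustrative row-level data-quality checks for an ingestion pipeline.
# REQUIRED_FIELDS and the check rules are hypothetical examples.
import json

REQUIRED_FIELDS = {"id", "event_time", "amount"}

def check_row(row: dict) -> list:
    """Return a list of data-quality violations for one ingested record."""
    issues = []
    missing = REQUIRED_FIELDS - row.keys()
    if missing:
        issues.append(f"missing fields: {sorted(missing)}")
    if "amount" in row and not isinstance(row["amount"], (int, float)):
        issues.append("amount is not numeric")
    return issues

def ingest(lines):
    """Split raw JSON lines into clean rows and quarantined (row, issues) pairs."""
    clean, quarantine = [], []
    for line in lines:
        row = json.loads(line)
        problems = check_row(row)
        if problems:
            quarantine.append((row, problems))
        else:
            clean.append(row)
    return clean, quarantine

clean, quarantine = ingest([
    '{"id": 1, "event_time": "2024-01-01T00:00:00Z", "amount": 9.5}',
    '{"id": 2, "amount": "oops"}',
])
# clean holds the valid record; quarantine holds the bad one with its issues
```

A production version would route the quarantined rows to a dead-letter table and emit metrics, but the split-and-report pattern is the same.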

AWS · PostgreSQL · Python · SQL · Apache Airflow · Apache Hadoop · Data Analysis · Data Mining · Erwin · ETL · Hadoop HDFS · Java · Kafka · MySQL · Oracle · Snowflake · Cassandra · ClickHouse · Data engineering · Data Structures · REST API · NoSQL · Spark · JSON · Data visualization · Data modeling · Data analytics · Data management

Posted 3 days ago
Apply

πŸ“ Republic of Ireland

πŸ” Software Development

  • Good coding skills in Python or equivalent (ideally Java or C++).
  • Hands-on experience in open-ended and ambiguous data analysis (pattern and insight extraction through statistical analysis, data segmentation, etc.).
  • A craving to learn and use cutting-edge AI technologies.
  • Understanding of building data pipelines to train and deploy machine learning models and/or ETL pipelines for metrics, analytics, or product feature use cases.
  • Experience in building and deploying live software services in production.
  • Exposure to some of the following technologies (or equivalent): Apache Spark, AWS Redshift, AWS S3, Cassandra (and other NoSQL systems), AWS Athena, Apache Kafka, Apache Flink, AWS, and service-oriented architecture.
  • Define problems and gather requirements in collaboration with product managers, teammates, and engineering managers.
  • Collect and curate the datasets necessary to evaluate and feed the generative models.
  • Develop and validate results of the generative AI models.
  • Fine-tune models when necessary.
  • Productionize models for offline and/or online usage.
  • Learn the fine art of balancing scale, latency, and availability depending on the problem.

AWS · Python · Data Analysis · ETL · Java · Machine Learning · C++ · Apache Kafka · Cassandra · NoSQL · Software Engineering

Posted 4 days ago
Apply

πŸ“ United Kingdom

πŸ” Software Development

  • Good coding skills in Python or equivalent (ideally Java or C++).
  • Hands-on experience in open-ended and ambiguous data analysis (pattern and insight extraction through statistical analysis, data segmentation, etc.).
  • A craving to learn and use cutting-edge AI technologies.
  • Understanding of building data pipelines to train and deploy machine learning models and/or ETL pipelines for metrics, analytics, or product feature use cases.
  • Experience in building and deploying live software services in production.
  • Exposure to some of the following technologies (or equivalent): Apache Spark, AWS Redshift, AWS S3, Cassandra (and other NoSQL systems), AWS Athena, Apache Kafka, Apache Flink, AWS, and service-oriented architecture.
  • Define problems and gather requirements in collaboration with product managers, teammates, and engineering managers.
  • Collect and curate the datasets necessary to evaluate and feed the generative models.
  • Develop and validate results of the generative AI models.
  • Fine-tune models when necessary.
  • Productionize models for offline and/or online usage.
  • Learn the fine art of balancing scale, latency, and availability depending on the problem.

AWS · Backend Development · Python · Software Development · Cloud Computing · Data Analysis · ETL · Java · Machine Learning · C++ · Apache Kafka · Cassandra · REST API

Posted 4 days ago
Apply

πŸ“ United States, Canada

🧭 Full-Time

πŸ’Έ 230000.0 - 322000.0 USD per year

πŸ” Software Development

  • 7+ years of contributing high-quality code to production systems that operate at scale.
  • 5+ years of experience building control systems, PID controllers, multi-armed bandits, reinforcement learning algorithms, or bid/pricing optimization systems.
  • Experience leading large engineering teams and collaborating with cross-functional partners is required.
  • Experience designing optimization algorithms in an ad serving platform and/or other marketplaces is preferred.
  • Experience with state-of-the-art control systems and reinforcement learning algorithms is a strong plus.
  • Build Reddit-scale optimizations to improve advertiser outcomes using cutting-edge techniques in the industry.
  • Leverage live auction data and model predictions to adjust campaign bids in real time.
  • Incorporate knowledge of the Reddit ads marketplace into budget-pacing algorithms powered by control and reinforcement learning systems.
  • Lead the team in designing new bid and budget optimization products and algorithms, as well as conducting rigorous A/B experiments to evaluate the business impact.
  • Actively participate and work with other leads to set the long-term direction for the team, and plan and oversee engineering designs and project execution.

AWS · Docker · Leadership · PostgreSQL · Python · SQL · Cloud Computing · Data Analysis · ElasticSearch · GCP · Java · Kubernetes · Machine Learning · PyTorch · Cross-functional Team Leadership · Algorithms · Cassandra · Data Structures · REST API · Redis · TensorFlow · Scala · Data modeling · A/B testing

Posted 6 days ago
Apply

πŸ“ United States

🧭 Full-Time

πŸ’Έ 177000.0 - 213000.0 USD per year

πŸ” FinTech

🏒 Company: Flex

  • A minimum of 6 years of industry experience in the data infrastructure/data engineering domain.
  • A minimum of 6 years of experience with Python and SQL.
  • A minimum of 3 years of industry experience using dbt.
  • A minimum of 3 years of industry experience using Snowflake and its basic features.
  • Familiarity with AWS services, with industry experience using Lambda, Step Functions, Glue, RDS, EKS, DMS, EMR, etc.
  • Industry experience with different big data platforms and tools such as Snowflake, Kafka, Hadoop, Hive, Spark, Cassandra, Airflow, etc.
  • Industry experience working with relational and NoSQL databases in a production environment.
  • Strong fundamentals in data structures, algorithms, and design patterns.
  • Design, implement, and maintain high-quality data infrastructure services, including but not limited to Data Lake, Kafka, Amazon Kinesis, and data access layers.
  • Develop robust and efficient DBT models and jobs to support analytics reporting and machine learning modeling.
  • Collaborate closely with the Analytics team on data modeling, reporting, and data ingestion.
  • Create scalable real-time streaming pipelines and offline ETL pipelines.
  • Design, implement, and manage a data warehouse that provides secure access to large datasets.
  • Continuously improve data operations by automating manual processes, optimizing data delivery, and redesigning infrastructure for greater scalability.
  • Create engineering documentation for design, runbooks, and best practices.
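To make the "scalable real-time streaming pipelines" bullet concrete, here is a minimal sketch of a tumbling-window aggregation in plain Python; a production pipeline would consume from Kafka or Kinesis rather than an in-memory list, and all names here are illustrative:

```python
# Illustrative tumbling-window aggregation; a real pipeline would read from
# Kafka/Kinesis and flush windows incrementally instead of batching in memory.
from collections import defaultdict

def tumbling_window_counts(events, window_secs=60):
    """Count events per (window_start, key); events are (epoch_seconds, key) pairs."""
    counts = defaultdict(int)
    for ts, key in events:
        window_start = ts - (ts % window_secs)  # align timestamp to window boundary
        counts[(window_start, key)] += 1
    return dict(counts)

counts = tumbling_window_counts([(0, "a"), (30, "a"), (61, "a"), (65, "b")])
# window [0, 60) sees "a" twice; window [60, 120) sees "a" once and "b" once
```

The window-alignment arithmetic is the core idea; everything else (late data, watermarks, state stores) is what streaming frameworks add on top.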

AWS · Python · SQL · Bash · Design Patterns · ETL · Hadoop · Java · Kafka · Snowflake · Airflow · Algorithms · Cassandra · Data engineering · Data Structures · NoSQL · Spark · Communication Skills · CI/CD · RESTful APIs · Terraform · Written communication · Documentation · Data modeling · Debugging

Posted 6 days ago
Apply

πŸ“ United States

🧭 Full-Time

πŸ’Έ 125600.0 - 185500.0 USD per year

πŸ” Software Development

🏒 Company: ClickHouseπŸ‘₯ 101-250πŸ’° Series B over 2 years agoDatabaseArtificial Intelligence (AI)Big DataAnalyticsSoftware

  • 6+ years of relevant software development industry experience building and operating scalable, fault-tolerant, distributed systems.
  • Experience with ClickHouse or relational (PostgreSQL, MySQL) and NoSQL (MongoDB, Cassandra) databases.
  • Proficiency with Kubernetes tooling (Helm, Kustomize, operators, Istio, service mesh).
  • Strong understanding of air-gapped architectures and data isolation.
  • Experience with containerized deployments (Docker, Kubernetes, OpenShift) in government environments.
  • Experience with cloud platforms (AWS, Azure, GCP, AWS GovCloud, Azure Government, or on-prem equivalents).
  • Proficiency in programming/scripting languages (Python or Go) for automation and integration.
  • U.S. Citizenship required (per U.S. federal contract requirements).
  • You have excellent communication skills and the ability to work well within a team and across engineering teams.
  • You are a strong problem solver and have solid production debugging skills.
  • You are passionate about efficiency, availability, scalability and data governance.
  • You thrive in a fast paced environment, and see yourself as a partner with the business with the shared goal of moving the business forward.
  • You have a high level of responsibility, ownership, and accountability.
  • Design and develop a highly available, scalable, and secure ClickHouse Cloud tailored for airgapped systems.
  • Work closely with existing Dataplane and Core teams to ensure software parity with existing cloud infrastructure.
  • Design and deploy ClickHouse Cloud on Kubernetes and containerized environments ensuring high availability, replication and backup.
  • Develop and maintain Helm charts, operators, and Kubernetes manifests for database management.
  • Implement repeatable automation to build, scale, and troubleshoot the infrastructure components that make up the air-gapped environment for disconnected operations.
  • Optimize ClickHouse Cloud database performance and storage architecture for on-premise, hybrid, and government cloud deployments.
  • Integrate secure authentication, encryption, and access control mechanisms.
  • Develop and maintain technical documentation for system architecture, security, and compliance audits.
  • Troubleshoot and resolve database performance, security, and operational issues.
  • Automate deployments and lifecycle management using Terraform, Ansible, or CI/CD pipelines.

AWS · Docker · PostgreSQL · Python · Software Development · SQL · Bash · Cloud Computing · GCP · Kubernetes · MongoDB · MySQL · Software Architecture · Algorithms · Azure · Cassandra · ClickHouse · Data Structures · Go · RDBMS · REST API · NoSQL · CI/CD · Linux · DevOps · Terraform · Microservices · Compliance · Excellent communication skills · JSON · Ansible · Scripting · Debugging

Posted 12 days ago
Apply

πŸ“ Canada

πŸ’Έ 150000.0 - 225000.0 CAD per year

πŸ” Cybersecurity

🏒 Company: crowdstrikecareers

  • Degree in Computer Science (or commensurate experience in data structures/algorithms/distributed systems).
  • The ability to scale backend systems – sharding, partitioning, scaling horizontally are second nature to you.
  • The desire to ship code and the love of seeing your bits run in production.
  • Deep understanding of distributed systems and scalability challenges.
  • Deep understanding of multi-threading, concurrency, and parallel processing technologies.
  • Team player skills – we embrace collaborating as a team as much as possible.
  • A thorough understanding of engineering best practices from appropriate testing paradigms to effective peer code reviews and resilient architecture.
  • The ability to thrive in a fast paced, test-driven, collaborative and iterative programming environment.
  • The skills to meet your commitments on time and produce high quality software that is unit tested, code reviewed, and checked in regularly for continuous integration.
  • Lead backend engineering efforts from rapid prototypes to large-scale applications across CrowdStrike products.
  • Leverage and build cloud based systems to detect targeted attacks and automate cyber threat intelligence production at a global scale.
  • Brainstorm, define, and build collaboratively with members across multiple teams.
  • Obsess about learning, and champion the newest technologies & tricks with others, raising the technical IQ of the team.
  • Be mentored and mentor other developers on web, backend and data storage technologies and our system.
  • Constantly re-evaluate our product to improve architecture, knowledge models, user experience, performance and stability.
  • Be an energetic β€˜self-starter’ with the ability to take ownership and be accountable for deliverables.
  • Use and give back to the open source community.
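The "sharding, partitioning, scaling horizontally" requirement above can be illustrated with a minimal consistent-hash ring in Python. This is a sketch, not CrowdStrike's design; real systems like Cassandra use Murmur3 tokens, and MD5 appears here only as a stable, dependency-free hash:

```python
# Illustrative consistent-hash ring for horizontal partitioning; real systems
# (e.g., Cassandra) use Murmur3 tokens. MD5 is just a stable stand-in here.
import bisect
import hashlib

class HashRing:
    def __init__(self, nodes, vnodes=64):
        ring = []
        for node in nodes:
            for i in range(vnodes):  # virtual nodes smooth the key distribution
                ring.append((self._hash(f"{node}:{i}"), node))
        ring.sort()
        self._ring = ring
        self._tokens = [token for token, _ in ring]

    @staticmethod
    def _hash(key):
        # First 8 bytes of MD5, interpreted as an integer token
        return int.from_bytes(hashlib.md5(key.encode()).digest()[:8], "big")

    def node_for(self, key):
        """Route a partition key to the first node at or after its token."""
        idx = bisect.bisect(self._tokens, self._hash(key)) % len(self._ring)
        return self._ring[idx][1]

ring = HashRing(["node-a", "node-b", "node-c"])
owner = ring.node_for("user:42")  # deterministic for a fixed node set
```

The appeal of this scheme is that adding or removing a node remaps only the keys adjacent to its tokens, rather than reshuffling the whole keyspace.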

AWS · Backend Development · GraphQL · Python · Software Development · Cloud Computing · Cybersecurity · ElasticSearch · Git · Hadoop · Kafka · MySQL · Algorithms · Cassandra · Data Structures · Go · Redis · Communication Skills · CI/CD · Problem Solving · RESTful APIs · Microservices · Teamwork

Posted 13 days ago
Apply
πŸ”₯ Rust Engineer

πŸ“ United States

πŸ” Software Development

🏒 Company: MNTNπŸ‘₯ 251-500πŸ’° $2,000,000 Seed about 2 years agoAdvertisingReal TimeMarketingSoftware

  • 2-3+ years of Rust or C/C++ development experience
  • 3+ years of Java, Kotlin, or Scala development experience
  • Experience writing SQL queries and designing database tables
  • Python experience (preferred)
  • Knowledge of modern design patterns
  • Experience with Microservice style architecture
  • Experience using cloud hosting solutions (Kubernetes, Istio, etc.)
  • Experience using GIT
  • Knowledge of how to write effective unit and functional test cases using test frameworks such as JUnit
  • Comfortable in a Linux/UNIX environment
  • Experience on AWS, GCP, or other cloud infrastructure
  • Knowledge of NoSQL databases such as Cassandra (preferred)
  • Design and build a robust marketing platform that reaches the right audience, anywhere, anytime
  • Build high volume services that are reliable at scale
  • Develop big data solutions using open source frameworks
  • Collaborate with and explain complex technical issues to Product and Project Leads
  • Optimize and enhance existing products

AWS · Backend Development · Software Development · SQL · Cloud Computing · Design Patterns · GCP · Git · Java · JUnit · Kotlin · Kubernetes · C++ · Algorithms · Cassandra · REST API · NoSQL · Rust · Linux · Microservices · Scala

Posted 19 days ago
Apply

πŸ“ Argentina, Uruguay, Colombia, Mexico, Dominican Republic

🧭 Part-Time

🏒 Company: Halo MediaπŸ‘₯ 11-50InternetConsultingWeb DevelopmentAppsMarketingMobileWeb DesignSoftware

  • Bachelor's degree in Computer Science, Information Technology, or related field.
  • 5+ years of experience as a Database Administrator or in a similar role.
  • Strong knowledge of SQL databases (MySQL, PostgreSQL, Microsoft SQL Server, etc.) and NoSQL databases (MongoDB, Cassandra, etc.).
  • Experience with database performance tuning, optimization, and indexing.
  • Familiarity with cloud-based database solutions (AWS RDS, Azure SQL, Google Cloud SQL, etc.).
  • Understanding of data security best practices and compliance standards.
  • Proficiency in scripting languages such as Python, Bash, or PowerShell.
  • Experience with database monitoring and management tools.
  • Strong analytical and problem-solving skills.
  • Manage, maintain, and optimize database systems.
  • Ensure the security, integrity, and availability of our data.
  • Optimize database performance and efficiency.

PostgreSQL · Python · SQL · Amazon RDS · Bash · Cloud Computing · Microsoft SQL Server · MongoDB · MySQL · Cassandra · RDBMS · NoSQL · Scripting

Posted 20 days ago
Apply
Showing 10 of 55 jobs.