Remote Working

Remote working from home offers convenience and freedom, a lifestyle embraced by millions of people around the world. With our platform, finding the right job, whether full-time or part-time, becomes quick and easy thanks to AI, precise filters, and daily updates. Sign up now and start your online career today!

Remote IT Jobs
Cassandra
59 jobs found.

Set alerts to receive daily emails with new job openings that match your preferences.

Apply
🔥 Site Reliability Engineer
Posted about 23 hours ago

📍 United States of America

💸 63,000 - 108,675 USD per year

🏢 Company: vspvisioncareers

  • Bachelor's Degree in Computer Science or related field and/or equivalent experience
  • 4+ years of related functional experience
  • Experience with both Windows and Linux, as well as containerization software products
  • Working knowledge of continuous integration and continuous delivery (CI/CD)
  • Experience with automation and orchestration using Chef, Puppet, Ansible and containers
  • Coding skills beyond simple scripts and knowledge of application architecture
  • Ability to program (structured and OO) with one or more high level languages, such as Python, Java, C/C++/C#, Ruby, and JavaScript
  • Understanding of distributed storage technologies like NFS, HDFS, Ceph, S3 as well as dynamic resource management frameworks (OpenShift, Kubernetes, Yarn)
  • Skilled in spotting problems and identifying performance bottlenecks, leading to problem and root cause analysis and risk mitigation
  • Capacity monitoring and performance planning experience with cloud solutions like AWS, using applications such as Dynatrace, New Relic, and AppDynamics
  • Use engineering design concepts to recommend design or test methods for attaining or improving operational reliability in support of business objectives.
  • Develop and implement high-reliability tools, systems, and services using engineering methodologies and tools.
  • Determine reliability requirements and deliver insights from massive scale data in real time.
  • Propose changes in design or formulation to improve system and/or process reliability.
  • Utilize best practices and work with cross-functional teams to provide solutions and a positive user experience.
  • Improve reliability, quality, and time-to-market for suite of software solutions, through effective hosting, monitoring, operations, and automation
  • Develop proprietary tools to improve system reliability and mitigate weaknesses in incident management or software delivery
  • Collaborate with team members to troubleshoot and fix issues, using knowledge of known problems to route support escalations to the appropriate teams
  • Add automation for improved collaborative response in real time; update documentation, runbook tools, and modules to prepare teams for incidents
  • Support optimizing the software development life cycle to boost service reliability, based on post-incident reviews
  • Support system cost modeling for all hosted systems
  • Measure and optimize system performance, with an eye toward pushing our capabilities forward, getting ahead of customer needs, and innovating to continually improve
  • Deliver primary operational support and engineering for distributed software applications
  • Implement guidelines and plans for automated systems delivery maintaining system and data security
  • Assist with impact analysis regarding enterprise-wide technology
  • Perform capacity monitoring with various monitoring tools (Splunk, Dynatrace, etc.) and make recommendations
  • Gather and analyze metrics from both operating systems and applications to assist in performance tuning, fault finding, and corrective action planning
  • Support system integration, software, and hardware at enterprise level for optimum performance
  • Partner with development teams to improve services through rigorous automated testing and release procedures
  • Contribute to system architecture planning, and policies and procedures surrounding enterprise-wide technology
  • Participate in system design consulting, platform management, and capacity planning
  • Stay abreast of new technologies; introduce applicable technology in alignment with business goals and for creative solutions

AWS, Docker, PostgreSQL, Python, SQL, Bash, Cloud Computing, Data Analysis, DynamoDB, ElasticSearch, Git, Java, Kafka, Kubernetes, MySQL, Oracle, RabbitMQ, Software Architecture, Zabbix, Algorithms, Cassandra, Data Structures, Prometheus, Redis, Spark, Communication Skills, Analytical Skills, CI/CD, Problem Solving, RESTful APIs, Linux, DevOps, Terraform, Microservices, Teamwork, Troubleshooting, JSON, Cross-functional collaboration, Ansible, Scripting, Debugging

Apply

πŸ“ United States

🧭 Full-Time

πŸ’Έ 80000.0 - 110000.0 USD per year

πŸ” Communications, Media, and Entertainment

  • 4+ years of professional experience with Java software development using Spring and REST-based architecture.
  • Experience or Knowledge with object-oriented development, data modeling, and design patterns.
  • Experience or Knowledge building systems for highly available multi-site environments with an understanding of the network architecture that supports such systems.
  • Professional experience with Java application servers and J2EE containers (Tomcat).
  • Knowledge of reactive coding patterns and frameworks (Reactor, Spring WebFlux, etc).
  • Fundamental understanding of data stores such as MongoDB, Cassandra, DynamoDB, Redis, Memcached, Oracle, Postgres.
  • Fundamental understanding of Agile methodology and software delivery via CI/CD.
  • Experience with infrastructure as code, build automation, observability, security principles, and technical architecture.
  • Fundamental understanding of testing methodologies and frameworks.
  • Understanding of the HTTP protocol and experience in caching, especially in HTTP-compliant caches.
  • Professional or Academic experience in developing with Major MVC frameworks (Spring MVC).
  • Strong technical written and verbal communication skills.
  • Design, build and scale sophisticated high-volume server-side applications and frameworks.
  • Gain an understanding of a complex microservices architecture to understand how new feature development or updates to existing codebase will affect the service as a whole.
  • Write reusable, testable, and maintainable code.
  • Collaborate with project stakeholders to identify product and technical requirements.
  • Conduct analysis to determine integration needs.
  • Write code that meets functional requirements and is testable and maintainable.
  • Have a passion for test driven development.
  • Design, create, and maintain observability telemetry collection and dashboards to understand service health.
  • Design, create, and maintain automation to perform processes such as builds, deployments, infrastructure as code, and operational automation.
  • Participate in production service support and issue resolution in a high-volume high-impact environment.
  • Work with Quality Assurance team to determine if applications fit specification and technical requirements.
  • Produce technical designs and documentation at varying levels of granularity.

AWS, Backend Development, Docker, PostgreSQL, SQL, Agile, DynamoDB, Java, Java EE, MongoDB, Spring, Spring Boot, Spring MVC, Cassandra, REST API, Redis, Tomcat, CI/CD, RESTful APIs, DevOps, Microservices

Posted 3 days ago
Apply

πŸ“ Canada

🧭 Full-Time

πŸ” Cybersecurity

🏒 Company: JobgetherπŸ‘₯ 11-50πŸ’° $1,493,585 Seed about 2 years agoInternet

  • 3+ years of back-end development experience, with expertise in Node.js and backend frameworks like Nest.js or Express.js.
  • Experience in designing and maintaining microservices architectures and contributing to full-stack development.
  • Proficiency in database management, schema design, performance tuning, and indexing for large-scale distributed databases.
  • Experience with message-driven architectures, using tools like Kafka or RabbitMQ.
  • Familiarity with CI/CD pipelines (Jenkins, GitLab CI, CircleCI) and automation of deployment and scaling.
  • Proven experience in leading and mentoring engineering teams.
  • Expertise in cloud-native technologies (e.g., AWS Lambda) and monitoring tools (e.g., Prometheus, Grafana).
  • Familiarity with containerized microservices using Kubernetes.
  • Strong problem-solving and communication skills, with a passion for continuous learning.
  • B.S. degree in Computer Science or a related field, or equivalent work experience.
  • Design, develop, and maintain backend systems and microservices using Node.js, Kubernetes, and related technologies.
  • Lead projects across the stack, focusing on backend components and collaborating with front-end developers for full-stack solutions.
  • Manage and optimize distributed databases like PostgreSQL, MongoDB, or Cassandra, ensuring scalability and performance.
  • Build and maintain APIs (RESTful, gRPC, or GraphQL) and integrate third-party services, ensuring security, performance, and scalability.
  • Mentor and guide junior engineers, leading complex, multi-person projects to successful completion.
  • Collaborate effectively with cross-functional teams and leadership to align technical solutions with business goals.

AWS, Backend Development, Docker, GraphQL, Leadership, Node.js, PostgreSQL, Express.js, Full Stack Development, Kafka, Kubernetes, MongoDB, RabbitMQ, API testing, Cassandra, Grafana, gRPC, Prometheus, Nest.js, CI/CD, RESTful APIs, Mentoring, Microservices

Posted 4 days ago
Apply

πŸ“ United States

πŸ” Software Development

🏒 Company: ge_externalsite

  • Exposure to industry-standard data modeling tools (e.g., ERWin, ER Studio)
  • Exposure to Extract, Transform & Load (ETL) tools like Informatica or Talend
  • Exposure to industry-standard data catalog, automated data discovery, and data lineage tools (e.g., Alation, Collibra, TAMR)
  • Hands-on experience in programming languages like Java, Python or Scala
  • Hands-on experience in writing SQL scripts for Oracle, MySQL, PostgreSQL or HiveQL
  • Experience with Big Data / Hadoop / Spark / Hive / NoSQL database engines (e.g., Cassandra or HBase)
  • Exposure to unstructured datasets and ability to handle XML, JSON file formats
  • Work independently as well as with a team to develop and support Ingestion jobs
  • Evaluate and understand various data sources (databases, APIs, flat files, etc.) to determine optimal ingestion strategies
  • Develop a comprehensive data ingestion architecture, including data pipelines, data transformation logic, and data quality checks, considering scalability and performance requirements.
  • Choose appropriate data ingestion tools and frameworks based on data volume, velocity, and complexity
  • Design and build data pipelines to extract, transform, and load data from source systems to target destinations, ensuring data integrity and consistency
  • Implement data quality checks and validation mechanisms throughout the ingestion process to identify and address data issues
  • Monitor and optimize data ingestion pipelines to ensure efficient data processing and timely delivery
  • Set up monitoring systems to track data ingestion performance, identify potential bottlenecks, and trigger alerts for issues
  • Work closely with data engineers, data analysts, and business stakeholders to understand data requirements and align ingestion strategies with business objectives.
  • Build technical data dictionaries and support business glossaries to analyze the datasets
  • Perform data profiling and data analysis for source systems, manually maintained data, machine generated data and target data repositories
  • Build both logical and physical data models for both Online Transaction Processing (OLTP) and Online Analytical Processing (OLAP) solutions
  • Develop and maintain data mapping specifications based on the results of data analysis and functional requirements
  • Perform a variety of data loads & data transformations using multiple tools and technologies.
  • Build automated Extract, Transform & Load (ETL) jobs based on data mapping specifications
  • Maintain metadata structures needed for building reusable Extract, Transform & Load (ETL) components.
  • Analyze reference datasets and familiarize yourself with Master Data Management (MDM) tools.
  • Analyze the impact of downstream systems and products
  • Derive solutions and make recommendations from deep dive data analysis.
  • Design and build Data Quality (DQ) rules needed

AWS, PostgreSQL, Python, SQL, Apache Airflow, Apache Hadoop, Data Analysis, Data Mining, ERWin, ETL, Hadoop HDFS, Java, Kafka, MySQL, Oracle, Snowflake, Cassandra, ClickHouse, Data engineering, Data Structures, REST API, NoSQL, Spark, JSON, Data visualization, Data modeling, Data analytics, Data management

Posted 5 days ago
Apply

πŸ“ Republic of Ireland

πŸ” Software Development

  • Good coding skills in Python or equivalent (ideally Java or C++).
  • Hands-on experience in open-ended and ambiguous data analysis (pattern and insight extraction through statistical analysis, data segmentation etc).
  • A craving to learn and use cutting edge AI technologies.
  • Understanding of building data pipelines to train and deploy machine learning models and/or ETL pipelines for metrics and analytics or product feature use cases.
  • Experience in building and deploying live software services in production.
  • Exposure to some of the following technologies (or equivalent): Apache Spark, AWS Redshift, AWS S3, Cassandra (and other NoSQL systems), AWS Athena, Apache Kafka, Apache Flink, AWS and service oriented architecture.
  • Define problems and gather requirements in collaboration with product managers, teammates and engineering managers.
  • Collect and curate datasets necessary to evaluate and feed the generative models.
  • Develop and validate results of the generative AI models.
  • Fine tune models when necessary.
  • Productionize models for offline and/or online usage.
  • Learn the fine art of balancing scale, latency and availability depending on the problem.

AWS, Python, Data Analysis, ETL, Java, Machine Learning, C++, Apache Kafka, Cassandra, NoSQL, Software Engineering

Posted 6 days ago
Apply

πŸ“ United Kingdom

πŸ” Software Development

  • Good coding skills in Python or equivalent (ideally Java or C++).
  • Hands-on experience in open-ended and ambiguous data analysis (pattern and insight extraction through statistical analysis, data segmentation etc).
  • A craving to learn and use cutting edge AI technologies.
  • Understanding of building data pipelines to train and deploy machine learning models and/or ETL pipelines for metrics and analytics or product feature use cases.
  • Experience in building and deploying live software services in production.
  • Exposure to some of the following technologies (or equivalent): Apache Spark, AWS Redshift, AWS S3, Cassandra (and other NoSQL systems), AWS Athena, Apache Kafka, Apache Flink, AWS and service oriented architecture.
  • Define problems and gather requirements in collaboration with product managers, teammates and engineering managers.
  • Collect and curate datasets necessary to evaluate and feed the generative models.
  • Develop and validate results of the generative AI models.
  • Fine tune models when necessary.
  • Productionize models for offline and/or online usage.
  • Learn the fine art of balancing scale, latency and availability depending on the problem.

AWS, Backend Development, Python, Software Development, Cloud Computing, Data Analysis, ETL, Java, Machine Learning, C++, Apache Kafka, Cassandra, REST API

Posted 6 days ago
Apply

πŸ“ Spain

🧭 Contract

πŸ” IT Consultancy

🏒 Company: CoduranceπŸ‘₯ 11-50Information TechnologySoftware

  • Excellent level of English and Spanish (written and spoken)
  • Experience with SQL and NoSQL database systems (MongoDB, Cassandra, CosmosDB, Redis, MSSQL, MySQL, etc.)
  • Experience with Data warehousing solutions (Azure Synapse, AWS Redshift, etc.)
  • Experience with ETL tools (Data Factory, Databricks, Amazon EMR, Hadoop, Spark, etc.)
  • Experience with Machine learning (Tensorflow, PyTorch, scikit-learn, NumPy, Pandas, etc.)
  • Experience with Data APIs (Knowledge of OData, GraphQL, as well as frameworks like FastAPI, Flask etc.)
  • Programming skills (preferably Python, Java and Scala)
  • Understanding the basics of distributed systems
  • Knowledge of algorithms and data structures
  • Ideally a master's degree in data engineering
  • Create and maintain optimal data pipeline architectures and data models
  • Analyze and assemble large, complex data sets that meet business requirements
  • Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources using SQL and NoSQL databases, AWS, Azure or GCP technologies
  • Build analytics tools for both business and data scientists
  • Provide and generate actionable insights from data around business performance metrics and report on them
  • Work with internal and external stakeholders to assist with data-related technical issues and support data infrastructure needs
  • Help in the implementation of algorithms and prototypes
  • Enhance data quality and reliability
  • Collaborate with data scientists, architects and the rest of the business and engineering team to successfully deliver projects

AWS, GraphQL, Python, SQL, Cloud Computing, Data Analysis, ETL, Flask, GCP, Java, Machine Learning, MongoDB, MySQL, NumPy, PyTorch, Algorithms, Cassandra, Data engineering, Data Structures, FastAPI, REST API, Redis, NoSQL, Pandas, Spark, TensorFlow, Communication Skills, Analytical Skills, Collaboration, CI/CD, Problem Solving, Agile methodologies, Scala, Data visualization, Data modeling

Posted 7 days ago
Apply

πŸ“ United States, Canada

🧭 Full-Time

πŸ’Έ 230000.0 - 322000.0 USD per year

πŸ” Software Development

  • 7+ years of contributing high-quality code to production systems that operate at scale.
  • 5+ years of experience building control systems, PID controllers, multi-armed bandits, reinforcement learning algorithms, or bid/pricing optimization systems.
  • Experience leading large engineering teams and collaborating with cross-functional partners is required.
  • Experience designing optimization algorithms in an ad serving platform and/or other marketplaces is preferred.
  • Experience with state of the art control systems, reinforcement learning algorithms is a strong plus.
  • Building Reddit-scale optimizations to improve advertiser outcomes using cutting-edge techniques in the industry.
  • Leverage live auction data and model predictions to adjust campaign bids in real time.
  • Incorporate knowledge of the Reddit ads marketplace into budget pacing algorithms powered by control & reinforcement learning systems
  • Lead the team on designing new bid & budget optimization products and algorithms as well as conducting rigorous A/B experiments to evaluate the business impact.
  • Actively participate and work with other leads to set the long term direction for the team, plan and oversee engineering designs and project execution.

AWS, Docker, Leadership, PostgreSQL, Python, SQL, Cloud Computing, Data Analysis, ElasticSearch, GCP, Java, Kubernetes, Machine Learning, PyTorch, Cross-functional Team Leadership, Algorithms, Cassandra, Data Structures, REST API, Redis, TensorFlow, Scala, Data modeling, A/B testing

Posted 8 days ago
Apply

πŸ“ United States

🧭 Full-Time

πŸ’Έ 177000.0 - 213000.0 USD per year

πŸ” FinTech

🏒 Company: Flex

  • A minimum of 6 years of industry experience in the data infrastructure/data engineering domain.
  • A minimum of 6 years of experience with Python and SQL.
  • A minimum of 3 years of industry experience using DBT.
  • A minimum of 3 years of industry experience using Snowflake and its basic features.
  • Familiarity with AWS services, with industry experience using Lambda, Step Functions, Glue, RDS, EKS, DMS, EMR, etc.
  • Industry experience with different big data platforms and tools such as Snowflake, Kafka, Hadoop, Hive, Spark, Cassandra, Airflow, etc.
  • Industry experience working with relational and NoSQL databases in a production environment.
  • Strong fundamentals in data structures, algorithms, and design patterns.
  • Design, implement, and maintain high-quality data infrastructure services, including but not limited to Data Lake, Kafka, Amazon Kinesis, and data access layers.
  • Develop robust and efficient DBT models and jobs to support analytics reporting and machine learning modeling.
  • Closely collaborating with the Analytics team for data modeling, reporting, and data ingestion.
  • Create scalable real-time streaming pipelines and offline ETL pipelines.
  • Design, implement, and manage a data warehouse that provides secure access to large datasets.
  • Continuously improve data operations by automating manual processes, optimizing data delivery, and redesigning infrastructure for greater scalability.
  • Create engineering documentation for design, runbooks, and best practices.

AWS, Python, SQL, Bash, Design Patterns, ETL, Hadoop, Java, Kafka, Snowflake, Airflow, Algorithms, Cassandra, Data engineering, Data Structures, NoSQL, Spark, Communication Skills, CI/CD, RESTful APIs, Terraform, Written communication, Documentation, Data modeling, Debugging

Posted 8 days ago
Apply

πŸ“ United States

πŸ’Έ 124800.0 - 163800.0 USD per year

πŸ” Software Development

🏒 Company: SinchπŸ‘₯ 1001-5000πŸ’° $48,845,918 Post-IPO Debt 6 months agoMessagingSaaSTelecommunicationsMobileSoftware

  • Proficient in Go.
  • Experience working in a CI/CD environment.
  • Experience with NoSQL databases (MongoDB, Cassandra).
  • Experience with highly scalable distributed systems capable of handling millions of requests.
  • Experience working with and maintaining RESTful APIs.
  • Passion for clean, simple, and well-tested code.
  • Makes design decisions and recommendations to satisfy business requirements based on the product roadmap/vision.
  • Makes recommendations based upon established vision and associated analysis in order to satisfy tactical, operational, and strategic needs.
  • Capable of prototyping solutions to determine feasibility and provide an outline to others for implementation.
  • Designs sustainable solutions and processes that minimize developer support and intervention.

Software Development, MongoDB, Cassandra, Go, REST API, NoSQL, CI/CD, RESTful APIs, Microservices

Posted 11 days ago
Apply
Showing 10 of 59

Ready to Start Your Remote Journey?

Apply to 5 jobs per day for free, or get unlimited applications with a subscription starting at €5/week.

Why do Job Seekers Choose Our Platform for Remote Work Opportunities?

We've developed a well-thought-out service for matching people with jobs they can do from home, making the search process easier and more efficient.

AI-powered Job Processing and Advanced Filters

Our algorithms process thousands of job postings daily, extracting only the key information from each listing. This allows you to skip lengthy texts and focus only on the offers that match your requirements.

With powerful skill filters, you can specify your core competencies to instantly receive a selection of job opportunities that align with your experience. 

Search by Country of Residence

For those looking for fully remote jobs in their own country, our platform offers the ability to customize the search based on your location. This is especially useful if you want to adhere to local laws, consider time zones, or work with employers familiar with local specifics.

If necessary, you can also work remotely with employers from other countries without being limited by geographical boundaries.

Regular Data Update

Our platform features over 40,000 remote job offers, both full-time and part-time, from 7,000 companies. This wide range ensures you can find offers that suit your preferences, whether from startups or large corporations.

We regularly verify the validity of vacancy listings and automatically remove outdated or filled positions, ensuring that you only see active and relevant opportunities.

Job Alerts

Once you register, you can set up convenient notification methods, such as receiving tailored job listings directly to your email or via Telegram. This ensures you never miss out on a great opportunity.

Our job board allows you to apply for up to 5 vacancies per day absolutely for free. If you wish to apply for more, you can choose a suitable subscription plan with weekly, monthly, or annual payments.

Wide Range of Completely Remote Online Jobs

On our platform, you'll find fully remote work positions in the following fields:

  • IT and Programming — software development, website creation, mobile app development, system administration, testing, and support.
  • Design and Creative — graphic design, UX/UI design, video content creation, animation, 3D modeling, and illustrations.
  • Marketing and Sales — digital marketing, SMM, contextual advertising, SEO, product management, sales, and customer service.
  • Education and Online Tutoring — teaching foreign languages, school and university subjects, exam preparation, training, and coaching.
  • Content — creating written content for websites, blogs, and social media; translation, editing, and proofreading.
  • Administrative Roles (Assistants, Operators) — virtual assistants, work organization support, calendar management, and document workflow assistance.
  • Finance and Accounting — bookkeeping, reporting, financial consulting, and taxes.

Other careers include: online consulting, market research, project management, and technical support.

All Types of Employment

The platform offers online remote jobs with different types of work:

  • Full-time — the ideal choice for those who value stability and predictability;
  • Part-time — perfect for those looking for a side job from home or seeking a balance between work and personal life;
  • Contract — suited for professionals who want to work on projects for a set period;
  • Temporary — short-term work that can be either full-time or part-time, often offered for seasonal or urgent tasks;
  • Internship — a form of on-the-job training that allows you to gain practical experience in your chosen field.

Whether you're looking for stable full-time employment, the flexibility of freelancing, or a part-time side gig, you'll find plenty of options on Remoote.app.

Remote Working Opportunities for All Expertise Levels

We feature offers for people with all levels of expertise:

  • for beginners — ideal positions for those just starting their journey in working from home online;
  • for intermediate specialists — if you already have experience, you can explore positions requiring specific skills and knowledge in your field;
  • for experts — roles for highly skilled professionals ready to tackle complex tasks.

How to Start Your Online Job Search Through Our Platform?

To begin searching for home job opportunities, follow these three steps:

  1. Register and complete your profile. This process takes minimal time.
  2. Specify your skills, country of residence, and the preferable position.
  3. Receive notifications about new vacancy openings and apply to suitable ones.

If you don't have a resume yet, use our online builder. It will help you create a professional document, highlighting your key skills and achievements. The AI will automatically optimize it to match job requirements, increasing your chances of a successful response. You can update your profile information at any time: modify your skills, add new preferences, or upload an updated resume.