
Software Engineer, Data

Posted 6 months ago


💎 Seniority level: Middle, 3-5 years

๐Ÿ“ Location: United States

💸 Salary: 110,000 - 140,000 USD per year

๐Ÿ” Industry: Marketing and Data Intelligence

๐Ÿข Company: Wpromote๐Ÿ‘ฅ 501-1000๐Ÿ’ฐ Private over 2 years agoAdvertisingSEOMarketingSEM

โณ Experience: 3-5 years

🪄 Skills: PostgreSQL, Python, Software Development, SQL, Business Intelligence, ETL, Software Architecture, Strategy, Airflow, Data engineering, Collaboration, CI/CD

Requirements:
  • Bachelor’s degree in Computer Science, Information Technology, or equivalent practical experience.
  • 3-5 years of experience in software development, particularly with Python and SQL.
  • Experience building and automating ETL/ELT pipelines using Airflow or similar software.
  • Proficient in dbt models, back-end API development, and software architecture.
  • Experience with PostgreSQL and BigQuery, focusing on query optimization.
  • Understanding of business intelligence tools like Looker.
  • Familiarity with Google Cloud Platform, Kubernetes, and Infrastructure as Code with Terraform.
  • Knowledge of advanced data formats and integration techniques.
  • Understanding of agile methodologies and CI/CD processes.
Responsibilities:
  • Collaborate with senior engineers to design, implement, maintain, and unit test scalable data solutions.
  • Build dbt models and leverage test-driven development practices.
  • Monitor, troubleshoot, and optimize the performance of data pipelines.
  • Ensure data quality through rigorous testing and validation processes.
  • Stay updated with industry trends to support continuous improvement in data engineering practices.
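The pipeline work this role describes (Airflow-orchestrated ETL/ELT with testing and data-quality validation) boils down to an extract, transform, validate, load flow. A minimal pure-Python sketch with illustrative names, not Wpromote's actual stack or the real Airflow/dbt APIs:

```python
# Minimal ETL sketch: extract -> transform -> validate -> load.
# All names are illustrative; a real pipeline would wrap each step in an
# Airflow task and express the transforms as dbt models.

def extract(rows):
    """Simulated source extraction: drop rows missing required fields."""
    return [r for r in rows if "id" in r and "amount" in r]

def transform(rows):
    """Normalize amounts from cents to dollars."""
    return [{"id": r["id"], "amount_usd": r["amount"] / 100} for r in rows]

def validate(rows):
    """Data-quality gate: every amount must be non-negative."""
    if any(r["amount_usd"] < 0 for r in rows):
        raise ValueError("negative amount in pipeline output")
    return rows

def load(rows, sink):
    """Append validated rows to the warehouse stand-in."""
    sink.extend(rows)
    return len(rows)

warehouse = []
source = [{"id": 1, "amount": 1250}, {"id": 2}, {"id": 3, "amount": 400}]
loaded = load(validate(transform(extract(source))), warehouse)
```

Each stage is independently unit-testable, which is the point of the test-driven-development requirement above: the validation step fails loudly rather than letting bad rows reach the warehouse.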

Related Jobs


๐Ÿ“ Needham, MA, El Segundo, CA

💸 150,000 - 215,000 USD per year

๐Ÿ” Travel

๐Ÿข Company: Tripadvisor๐Ÿ‘ฅ 1001-5000๐Ÿ’ฐ $300,000,000 Post-IPO Equity almost 4 years ago๐Ÿซ‚ Last layoff about 1 year agoInternetHospitalityInformation ServicesE-CommerceRestaurantsVacation RentalHotelTravelSocial Media

Requirements:
  • 10+ years of experience as a professional engineer.
  • Bachelor of Science in Computer Science, Engineering or equivalent.
  • Deep experience querying, transforming (ETL), and analyzing large data sets in databases.
  • Prior experience running and executing large-scale initiatives successfully.
  • Prior partnerships with business leaders to achieve substantial goals.
  • Solid foundation in data structures, algorithms, and OO design.
Responsibilities:
  • Build and drive a technical roadmap for our SEO platform.
  • Design solutions to business problems by building new tools and/or processes.
  • Independently manage projects with a focus on improvement.
  • Ensure code quality through design and code review leadership.
  • Mentor other team members.
  • Share technical knowledge and solutions through tech talks and design reviews.

Python, Software Development, SQL, Data Analysis, ETL, Algorithms, Data Structures

Posted 26 days ago

๐Ÿ“ North America, Europe

🧭 Full-Time

💸 140,000 - 170,000 CAD/USD per year

๐Ÿ” Blockchain technology

Requirements:
  • Proven extensive experience in software engineering and distributed systems.
  • Ability to write production-ready applications in Go.
  • Ability to reason about tradeoffs between different engineering approaches.
  • Familiar with data storage solutions (SQL/noSQL databases).
  • Familiar with containerized infrastructure (Docker, Kubernetes).
  • Results-oriented individual with a high EQ and attention to detail.
  • Ability to architect solutions factoring in maintenance, scalability, and security.
  • Familiar with team processes based on agile methodology.
  • Collaborative approach to sharing ideas and finding innovative solutions.
  • Improve overall productivity through technical leadership and mentorship.
Responsibilities:
  • Create high performance indexing software to ingest data from blockchains or other sources in Go.
  • Full responsibility for technical architecting and team processes.
  • Work closely with other business units to prioritize deliverables and set timelines.
  • Make technical decisions and explain them to team members for buy-in.
  • Process big data collections and design fast-read data stores.
  • Design and implement high availability APIs for large blockchain datasets.
  • Design next generation data pipelines.
  • Mentor junior team members.

Docker, SQL, Blockchain, Kubernetes, Data engineering, Go, NoSQL

Posted about 1 month ago

๐Ÿ“ Canada, US, NOT STATED

🧭 Full-Time

💸 140,000 - 170,000 CAD per year

๐Ÿ” Blockchain technology, Web3

๐Ÿข Company: Figment๐Ÿ‘ฅ 11-50HospitalityTravel AccommodationsArt

Requirements:
  • Proven extensive experience in software engineering and distributed systems.
  • Ability to write production-ready applications in Go.
  • Ability to reason about tradeoffs between different engineering approaches.
  • Familiarity with data storage solutions such as SQL and NoSQL databases.
  • Experience with containerized infrastructure, including Docker and Kubernetes.
  • High attention to detail with a results-oriented approach.
  • Ability to architect solutions considering maintenance, scalability, and security.
  • Familiar with agile methodologies and team processes.
  • Collaborative approach for sharing ideas and finding innovative solutions.
Responsibilities:
  • Create high performance indexing software to ingest data from blockchains or other sources in Go.
  • Full responsibility for technical architecting and team processes.
  • Work closely with various business units to prioritize deliverables and set timelines.
  • Make technical decisions on different engineering approaches and achieve buy-in from team members.
  • Process big data collections and design fast-read data stores.
  • Design and implement highly available APIs for large blockchain datasets.
  • Design next generation data pipelines.
  • Serve as a mentor for junior team members.

Docker, SQL, Blockchain, Kubernetes, Go, NoSQL

Posted about 1 month ago

๐Ÿ“ United States

🧭 Full-Time

๐Ÿ” Blockchain intelligence and financial technology

๐Ÿข Company: TRM Labs๐Ÿ‘ฅ 101-250๐Ÿ’ฐ $70,000,000 Series B about 2 years agoCryptocurrencyComplianceBlockchainBig Data

Requirements:
  • Bachelor's degree (or equivalent) in Computer Science or a related field.
  • 5+ years of hands-on experience in architecting scalable API development and distributed system architecture.
  • Exceptional programming skills in Python and proficiency in SQL or SparkSQL.
  • In-depth experience with data stores such as BigQuery and Postgres.
  • Proficiency in data pipeline tools like Airflow and DBT.
  • Expertise in data processing technologies including Dataflow, Spark, Kafka, and Flink.
  • Competence in deploying and monitoring infrastructure using tools like Docker, Terraform, Kubernetes, and Datadog.
  • Proven ability in loading, querying, and transforming extensive datasets.
Responsibilities:
  • Build highly scalable features integrating with multiple blockchains.
  • Design intricate data models for optimal storage and retrieval supporting sub-second latency for querying blockchain data.
  • Collaborate across departments with data scientists, backend engineers, and product managers to enhance TRM’s products.

Docker, Python, SQL, Apache Airflow, Kafka, Kubernetes, Postgres, Spark, Terraform

Posted about 2 months ago

๐Ÿ“ United States, India, United Kingdom

๐Ÿ” B2B Technology

๐Ÿข Company: Demandbase๐Ÿ‘ฅ 501-1000๐Ÿ’ฐ $175,000,000 Debt Financing almost 2 years agoSales AutomationAdvertisingBig DataSaaSAnalyticsB2BMarketingMarketing AutomationSoftware

Requirements:
  • Bachelor’s or Master’s degree in Computer Science, Mathematics, or Statistics from a top engineering institution.
  • 4+ years of data engineering experience in building enterprise data/analytics solutions.
  • Practical experience with complex analytics projects and advanced SQL for data analysis.
  • Strong practical experience in databases, Advanced SQL, and Python/R.
  • Good understanding of data strategies and data model design.
Responsibilities:
  • Design, model, and implement data analysis and analytics solutions.
  • Contribute hands-on to data projects involving high-level design, analysis, experiments, data architecture, and data modeling.
  • Support ETL pipeline modules through effective data transformation, data cleaning, reporting, and statistical analysis.
  • Apply analysis techniques such as segmentation, regression, clustering, and data profiling to analyze trends and report KPIs.
  • Collaborate with cross-functional teams in an Agile setting to build a scalable, high-availability data analytics platform.

Python, SQL, Agile, Data Analysis, ETL, Java, JavaScript, Product Development, Data engineering, Spark, Communication Skills, Problem Solving, Data modeling

Posted about 2 months ago

๐Ÿ“ US

🧭 Full-Time

💸 200,000 - 255,000 USD per year

๐Ÿ” Financial services, Blockchain

Requirements:
  • A Bachelor's degree (or equivalent) in Computer Science or a related field.
  • A proven track record with 8+ years of hands-on experience in architecting distributed system architecture.
  • Exceptional programming skills in Python and adeptness in SQL or SparkSQL.
  • In-depth experience with data stores such as Iceberg, Trino, BigQuery, and StarRocks.
  • Proficiency in data pipeline and workflow orchestration tools like Airflow, DBT.
  • Expertise in data processing technologies and streaming workflows including Spark, Kafka, and Flink.
  • Competence in deploying and monitoring infrastructure within public cloud platforms using Docker, Terraform, Kubernetes, and Datadog.
  • Proven ability in loading, querying, and transforming extensive datasets.
Responsibilities:
  • Build highly reliable data services to integrate with dozens of blockchains.
  • Develop complex ETL pipelines that transform and process petabytes of structured and unstructured data in real-time.
  • Design and architect intricate data models for optimal storage and retrieval to support sub-second latency for querying blockchain data.
  • Oversee the deployment and monitoring of large database clusters with a focus on performance and high availability.
  • Collaborate across departments to design and implement novel data models that enhance TRM’s products.

Docker, Python, SQL, Blockchain, ETL, Kafka, Kubernetes, Airflow, Data engineering, Spark, Collaboration, Terraform, Documentation

Posted about 2 months ago

๐Ÿ“ US

๐Ÿ” Blockchain intelligence and financial services

Requirements:
  • Bachelor's degree (or equivalent) in Computer Science or a related field.
  • 5+ years of hands-on experience in architecting distributed system architecture.
  • Exceptional programming skills in Python.
  • Adeptness in SQL or SparkSQL.
  • In-depth experience with data stores such as ClickHouse, ElasticSearch, Postgres, Redis, and Neo4j.
  • Proficiency in data pipeline and workflow orchestration tools like Airflow, DBT, Luigi, Azkaban, and Storm.
  • Expertise in data processing technologies and streaming workflows including Spark, Kafka, and Flink.
  • Competence in deploying and monitoring infrastructure within public cloud platforms, utilizing tools like Docker, Terraform, Kubernetes, and Datadog.
Responsibilities:
  • Build highly reliable data services to integrate with various blockchains.
  • Develop complex ETL pipelines for processing structured and unstructured data in real-time.
  • Design intricate data models to support optimal storage and retrieval with sub-second latency.
  • Oversee large database cluster deployment and monitoring with a focus on performance.
  • Collaborate with data scientists, engineers, and product managers to enhance TRM's products.
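The real-time ETL duties above (ingest from streams, transform, land in a fast-read store) follow a consume-transform-sink pattern. A pure-Python sketch with illustrative names; a production version would consume from Kafka and run in Spark or Flink rather than a plain generator:

```python
# Streaming-ETL sketch: records flow through a transform and land in a
# sink as they arrive, instead of in nightly batches. Names and the
# wei -> ETH conversion are illustrative, not TRM's actual pipeline.

def stream(records):
    """Simulated message stream (stand-in for a Kafka consumer)."""
    yield from records

def transform(record):
    """Flatten one raw event into the shape the data model expects."""
    return {"tx": record["tx"], "value_eth": record["wei"] / 10**18}

def run(records, sink):
    """Drain the stream, transforming each record into the sink."""
    for rec in stream(records):
        sink.append(transform(rec))
    return len(sink)

sink = []
count = run([{"tx": "0xabc", "wei": 2 * 10**18}], sink)
```

Because each record is handled independently, the same transform can be parallelized across partitions, which is what the Spark/Kafka/Flink requirement is getting at.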

Docker, Python, SQL, Blockchain, ElasticSearch, ETL, Kafka, Kubernetes, Airflow, ClickHouse, Data engineering, Postgres, Redis, Spark, Collaboration, Terraform, Documentation

Posted about 2 months ago

๐Ÿ“ United States

๐Ÿ” Life sciences

Requirements:
  • Applicants must have the unrestricted right to work in the United States.
  • Veeva will not provide sponsorship at this time.
Responsibilities:
  • Spearhead the development of new architecture for the Data platform from the ground up.
  • Design and build a resilient, scalable cloud-based platform along with its accompanying tools.
  • Empower Opendata teams to efficiently create and distribute valuable data assets.
  • Exercise end-to-end ownership for the project.

Backend Development, Leadership, Software Development, Cross-functional Team Leadership, Communication Skills, Analytical Skills, Collaboration

Posted 3 months ago

๐Ÿ“ USA

🧭 Full-Time

💸 169,000 - 240,000 USD per year

๐Ÿ” Financial services

Requirements:
  • 5+ years of industry experience in building large scale production systems.
  • Experience building and owning large-scale stream processing systems.
  • Experience building and operating robust and highly available infrastructure.
  • Working knowledge of Relational and NoSQL databases.
  • Experience working with Data Warehouse solutions.
  • Experience with industry standard stream processing frameworks like Spark, Samza, Flink, Beam etc.
  • Experience leading technical projects and mentoring junior engineers.
  • Exceptionally collaborative with a history of delivering complex technical projects and working closely with stakeholders.
  • This position requires either equivalent practical experience or a Bachelorโ€™s degree in a related field.
Responsibilities:
  • Help support the Data Platform that forms the backbone for thousands of offline workloads at Affirm.
  • Design and build data infrastructure systems, services, and tools to handle new Affirm products and business requirements that securely scale over millions of users and their transactions.
  • Build frameworks and services which will be used by other engineering teams at Affirm to manage billions of dollars in loans and power customer experiences.
  • Improve the reliability and efficiency of the Data Platform at scale and high reliability.
  • Engage other teams at Affirm about their use of the Data platform to ensure we are always building the right thing.

Backend Development, Leadership, Software Development, SQL, Data Analysis, ElasticSearch, Kafka, Cross-functional Team Leadership, Apache Kafka, Spark, Collaboration

Posted 3 months ago

๐Ÿ“ United States

🧭 Full-Time

💸 $240,000 - $270,000 per year

๐Ÿ” Blockchain intelligence data platform

Requirements:
  • Bachelor's degree (or equivalent) in Computer Science or a related field.
  • 5+ years of experience in building distributed system architecture, with a particular focus on incremental updates from inception to production.
  • Strong programming skills in Python and SQL.
  • Deep technical expertise in advanced data structures and algorithms for incremental updating of data stores (e.g., Graphs, Trees, Hash Maps).
  • Comprehensive knowledge across all facets of data engineering, including implementing and managing incremental updates in data stores like BigQuery, Snowflake, RedShift, Athena, Hive, and Postgres.
  • Orchestrating data pipelines and workflows focused on incremental processing using tools such as Airflow, DBT, Luigi, Azkaban, and Storm.
  • Developing and optimizing data processing technologies and streaming workflows for incremental updates (e.g., Spark, Kafka, Flink).
  • Deploying and monitoring scalable, incremental update systems in public cloud environments (e.g., Docker, Terraform, Kubernetes, Datadog).
  • Expertise in loading, querying, and transforming large datasets with a focus on efficiency and incremental growth.
Responsibilities:
  • Design and build our Cloud Data Warehouse with a focus on incremental updates to improve cost efficiency and scalability.
  • Research innovative methods to incrementally optimize data processing, storage, and retrieval to support efficient data analytics and insights.
  • Develop and maintain ETL pipelines that transform and incrementally process petabytes of structured and unstructured data to enable data-driven decision-making.
  • Collaborate with cross-functional teams to design and implement new data models and tools focused on accelerating innovation through incremental updates.
  • Continuously monitor and optimize the Data Platform's performance, focusing on enhancing cost efficiency, scalability, and reliability.
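The incremental-update theme running through this posting usually means watermark-based processing: each run handles only rows newer than the last processed timestamp instead of recomputing the full table. A minimal sketch with illustrative names (not this employer's actual implementation):

```python
# Watermark-based incremental processing sketch. State carries the last
# processed timestamp and running aggregates; re-running an old batch is
# a no-op, so the pipeline is safe to retry without double-counting.

def incremental_run(rows, state):
    """Process rows with ts > state['watermark']; advance the watermark."""
    new_rows = [r for r in rows if r["ts"] > state["watermark"]]
    for r in new_rows:
        state["totals"][r["key"]] = state["totals"].get(r["key"], 0) + r["value"]
    if new_rows:
        state["watermark"] = max(r["ts"] for r in new_rows)
    return len(new_rows)

state = {"watermark": 0, "totals": {}}
batch = [{"ts": 1, "key": "a", "value": 5}, {"ts": 2, "key": "b", "value": 3}]
first = incremental_run(batch, state)
# Replaying the same batch processes nothing: the watermark filters it out.
second = incremental_run(batch, state)
```

The same idea underlies incremental models in dbt and merge/upsert loads into warehouses like BigQuery or Snowflake: cost scales with new data, not total data.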

Docker, Python, SQL, ETL, Kafka, Kubernetes, Machine Learning, Snowflake, Airflow, Algorithms, Data engineering, Data science, Data Structures, Postgres, Spark, Collaboration, Terraform, Data analytics

Posted 4 months ago