Analytics Engineer

Posted 30 days ago

💎 Seniority level: Junior, 2+ years

📍 Location: United States, Canada

💸 Salary: 165,000 - 175,000 USD per year

🔍 Industry: Software Development

🏢 Company: Warp 👥 51-100 💰 $50,000,000 Series B almost 2 years ago · Information Services · Information Technology · Software

🗣️ Languages: English

⏳ Experience: 2+ years

🪄 Skills: Python, SQL, Data Analysis, ETL, GCP, Snowflake, Tableau, Data engineering, Spark, CI/CD, Data visualization, Data modeling

Requirements:
  • 2+ years of experience in an analytics-oriented data role
  • Expert fluency in SQL & dbt, on top of a modern warehouse like BigQuery, Redshift, or Snowflake
  • Experience with a data visualization tool such as Looker, Mode, Tableau (we use Metabase)
  • Writing performant queries
  • Optimizing cost and compute across our data stack
  • Building pipelines to move data in and out of our data warehouse
  • Familiar with the basics of an engineering workflow (command line, version control, CI/CD, writing tests, etc.)
Responsibilities:
  • Building and visualizing business/product metrics across charts and dashboards
  • A/B experiment design and analysis
  • Adapting dbt models for new product features, business lines, etc.
  • Forecasting growth
  • Defining core data models that serve as flexible sources of truth for downstream analysis and charting (a minimal dbt-style sketch follows this list)
  • Building data quality monitoring frameworks
  • Building tooling to enable self-serve analytics
  • Writing ETL to integrate new classes of data, and reverse ETL to share warehouse sources of truth with downstream GTM tools
  • One-off data research projects to deepen our understanding of user behavior and to uncover levers for growth in the product (how does Warp grow within companies? What does the path to upgrade look like? How to optimize pricing? How to flag enterprise leads? What do power users do differently?)
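
As a rough illustration of the "core data models" responsibility in this list, the sketch below shows what such a dbt model could look like. The `stg_events` staging model, the event name, and the definition of "active" are assumptions made for the example, not details from the posting.

```sql
-- daily_active_users.sql: hypothetical dbt model that rolls raw product events
-- up to one row per day, so every chart shares the same definition of "active".

with events as (

    select
        user_id,
        cast(occurred_at as date) as activity_date
    from {{ ref('stg_events') }}            -- assumed staging model
    where event_name = 'command_executed'   -- assumed definition of "active"

)

select
    activity_date,
    count(distinct user_id) as daily_active_users
from events
group by activity_date
order by activity_date
```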

Related Jobs

📍 Canada

🧭 Full-Time

🔍 FinTech

🏢 Company: KOHO

  • 5+ years of mastery in data manipulation and analytics architecture
  • Advanced expertise in dbt (incremental modeling, materializations, snapshots, variables, macros, jinja); see the incremental-model sketch after this list
  • Strong command of SQL, efficient query writing, query optimization, and data warehouse design
  • Building strong relationships with stakeholders (the finance team), and scoping and prioritizing their analytics requests.
  • Understanding business needs and translating them to requirements.
  • Using dbt (Core for development and Cloud for orchestration) to transform, test, deploy, and document financial data while applying software engineering best practices.
  • Troubleshooting variances in reports, and striving to eliminate them at the source.
  • Building game-changing data products that empower the finance team
  • Architecting solutions that transform complex financial data into actionable insights
  • Monitoring, optimizing and troubleshooting warehouse performance (AWS Redshift).
  • Creating scalable, self-service analytics solutions that democratize data access
  • Occasionally building dashboards and reports in Sigma and Drivetrain.
  • Defining processes, building tools, and offering training to empower all data users in the organization.
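
The dbt expertise called out above (incremental modeling, macros, jinja) can be pictured with a minimal sketch like the one below. The `stg_transactions` model and its columns are hypothetical; a real financial model would differ.

```sql
-- fct_transactions.sql: hypothetical dbt incremental model. A full refresh rebuilds
-- the table; normal runs only process rows newer than what is already loaded,
-- which keeps warehouse cost down on large financial tables.

{{ config(
    materialized='incremental',
    unique_key='transaction_id'
) }}

select
    transaction_id,
    account_id,
    amount,
    currency,
    created_at
from {{ ref('stg_transactions') }}

{% if is_incremental() %}
  -- only rows added since the last successful run of this model
  where created_at > (select max(created_at) from {{ this }})
{% endif %}
```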

SQL, Data engineering, Communication Skills, Analytical Skills, Data visualization, Data modeling, Finance, Data analytics

Posted 1 day ago

🔥 Analytics Engineer

📍 United States

🧭 Full-Time

💸 115,000 - 129,000 USD per year

🔍 Education

🏢 Company: amplify_careers

  • BS in Computer Science, Data Science, or equivalent experience.
  • 3+ years of professional software development or data engineering experience
  • Strong computer, data, and analytics engineering fundamentals.
  • Proven fluency in SQL and its use in code-based ETL frameworks, preferably dbt
  • Understanding of ETL/ELT pipelines, analytical data modeling, aggregations, and metrics
  • Strong understanding of analytical modeling architectures, including the Kimball dimensional data model design
  • Ability to clearly communicate and present technical concepts to a broad audience both verbally and in written form
  • Build well-tested and documented ELT data pipelines for both full and incremental dbt models to funnel into a fact and dimensional data mart.
  • Work closely with sales on logistics pipeline forecasting and sales pipeline tracking to help focus our sales teams in the right areas for the biggest impact.
  • Align with finance to make sure we have well-audited data in line with established financial best practices.
  • Engineer novel datasets that express a student's progress and performance through an adaptive learning experience, allowing for flexible comparison across students and deep analysis of individual students.
  • Work with the data science team to measure the impact of design changes on an administrator reporting application.
  • Contribute to leading industry data standards, such as Caliper Analytics, EdFi, or xAPI
  • Craft slowly changing dimensional models that take into account the nuances of K-12 education, such as school-year changes and students moving schools or classes (sketched as a dbt snapshot below).
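
A minimal sketch of the slowly changing dimension idea from the last bullet, expressed as a dbt snapshot. The source table, columns, and tracked fields are assumptions for illustration only.

```sql
-- student_enrollment_snapshot.sql: hypothetical dbt snapshot (SCD Type 2).
-- Each time a tracked column changes (a student switches schools or classes,
-- or the school year rolls over), dbt closes the old row and inserts a new one.

{% snapshot student_enrollment_snapshot %}

{{ config(
    target_schema='snapshots',
    unique_key='student_id',
    strategy='check',
    check_cols=['school_id', 'class_id', 'school_year']
) }}

select
    student_id,
    school_id,
    class_id,
    school_year
from {{ source('sis', 'enrollments') }}   -- assumed source system

{% endsnapshot %}
```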

AWS, PostgreSQL, Python, SQL, Business Intelligence, Data Analysis, ETL, Snowflake, Tableau, Airflow, Data engineering, Analytical Skills, CI/CD, Data visualization, Data modeling, Data analytics

Posted 6 days ago

🔥 Analytics Engineer

📍 AZ, CA, CO, FL, GA, ID, IL, KY, MD, MI, NJ, NM, NY, NC, OH, OR, PA, SC, TN, TX, UT, VA, WA

🔍 Healthcare

  • Experience as a data engineer or experienced data analyst
  • Passionate about establishing a data-driven company
  • Comfortable filling multiple roles in a growing startup
  • Design, develop, and implement automation and monitoring, and maintain high availability of production and non-production environments.

AWS, PostgreSQL, Python, SQL, Apache Airflow, Cloud Computing, Data Analysis, ETL, Snowflake, Data engineering, REST API, Communication Skills, Analytical Skills, CI/CD, Problem Solving, Teamwork, Data visualization, Data modeling, Data analytics, Data management

Posted 8 days ago

📍 Canada

🔍 Software Development

🏢 Company: Lightspeed Commerce 👥 1001-5000 💰 $716,100,000 Post-IPO Equity over 3 years ago 🫂 Last layoff 4 months ago · E-Commerce · Business Information Systems · Retail Technology · Cloud Management

  • Proficient with SQL and Python (or equivalent language)
  • Experience with MPP data warehouse (we use BigQuery)
  • Proficiency in performing data profiling, data mapping and data quality validation
  • Experience with a data orchestrator
  • Solid understanding of industry standards for data warehousing and data modeling
  • Proficiency with data environments (we use GCP) as well as containerization solutions
  • Proficiency in source control, CI/CD and automated testing
  • A security mindset with ability to work with compliance protocols (SOX, GDPR)
  • Excellent communication skills and ability to communicate with technical and non-technical stakeholders
  • Excellent problem solving skills
  • Ability to work independently and in teams
  • Self-motivated and great at taking initiative
  • Cleanse and transform raw data into business logic to help stakeholders build reports and data models
  • Ensure the right data gets to the right people at the right time
  • Collaborate with a Central Data Team to build data pipelines in accordance with shared best practices and design, and feed proper data models for usage
  • Act as a champion for data quality and usability across the Golf organization
  • Ensure availability, quality and searchability of data and documentation
  • Write clean, maintainable queries and scripts that are performant, well structured, and easy to understand and extend
  • Automate testing of data models and add appropriate validations to ensure quality (see the test sketch after this list)
  • Help define and improve our internal standards for data models and infrastructure
  • Help define the data vision and roadmap for a new and fast-growing product
  • Mentor and help grow the team
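
To make the "automate testing of data models" bullet concrete, here is a minimal dbt singular test. The model names are hypothetical, not Lightspeed's actual checks.

```sql
-- assert_no_orphaned_bookings.sql: hypothetical dbt singular test.
-- A singular test is a query that must return zero rows; any rows it returns
-- are reported as failures when `dbt test` runs in CI.

select
    b.booking_id
from {{ ref('fct_bookings') }} as b
left join {{ ref('dim_customers') }} as c
    on b.customer_id = c.customer_id
where c.customer_id is null   -- bookings that point at no known customer
```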

Python, SQL, GCP, Communication Skills, CI/CD, Problem Solving, Compliance, Data modeling, SaaS

Posted 9 days ago

📍 USA

🧭 Full-Time

🔍 Software Development

🏢 Company: PermitFlow 👥 1-10 💰 $5,500,000 Seed almost 2 years ago · Construction · Apps · Software

  • 4+ years of experience in analytics engineering, data engineering, or data analytics roles.
  • Strong proficiency in PostgreSQL, SQL, dbt, or similar data warehousing technologies.
  • Experience with ETL pipelines and tools, or other modern data stack tools.
  • Advanced programming skills in Python or other languages for data transformation and analysis.
  • Proven experience designing data models and building scalable data warehouses for analytics purposes.
  • A deep understanding of data governance, data quality practices, and developing self-serve analytics solutions.
  • Strong communication and collaboration skills, with the ability to explain technical concepts to non-technical stakeholders.
  • Experience with business intelligence tools like Looker, Tableau, or Power BI.
  • Familiarity with cloud platforms like Google Cloud.
  • Design and implement scalable data models optimized for analytics and company-wide reporting, continuously refining them to meet evolving business needs.
  • Build and maintain efficient data pipelines to transform datasets for analytics.
  • Collaborate with product and engineering teams to integrate data from sources like PermitFlow’s CRM, 3rd party vendors, and other internal sources while optimizing for performance and reliability.
  • Implement data governance to ensure consistency, quality, and security. Define key metrics, track data lineage, and enforce data quality checks.
  • Work closely with stakeholders to deliver analytics solutions, develop dashboards, and provide training on data tools and best practices.
  • Manage and optimize PermitFlow’s data stack to support scalable reporting and insights.

PostgreSQL, Python, SQL, Apache Airflow, Business Intelligence, Cloud Computing, Data Analysis, ETL, GCP, Data engineering, RESTful APIs, JSON, Data visualization, Data modeling, Data analytics

Posted 11 days ago

📍 US

🧭 Full-Time

🔍 Security

🏢 Company: Crogl 👥 11-50 💰 $25,000,000 Series A 26 days ago · Computer · Network Security · Software

  • Experience with databases, data lakes, and log tools (Postgres, Databricks, Splunk)
  • Experience with compound AI components (LLMs, Vector DBs, containers)
  • Deep understanding of security operations use cases and workflows
  • Proficiency in Python and familiarity with cloud services (AWS, Azure, GCP)
  • Strong communication skills to translate complex technical concepts to both technical and non-technical audiences
  • Serve as primary technical support contact for Crogl customers, diagnosing and resolving issues with deployments
  • Create and maintain knowledge base articles and technical documentation
  • Develop and implement security operations use cases for customers, delivering compelling demonstrations of Crogl's capabilities
  • Develop prototypes and extensions based on threat advisories, best practices or customer driven use cases
  • Collaborate with engineering to address escalations, create new features and extend use cases

AWS, PostgreSQL, Python, SQL, Cloud Computing, Cybersecurity, Data Analysis, Machine Learning, REST API, Communication Skills, Analytical Skills, Troubleshooting, Data modeling, Customer support

Posted 14 days ago

📍 United States

💸 157,300 - 255,200 USD per year

🔍 SaaS

🏢 Company: Calendly 👥 501-1000 💰 $350,000,000 Series B about 4 years ago 🫂 Last layoff over 1 year ago · Productivity Tools · Enterprise Software · Collaboration · Meeting Software · Scheduling · Software

  • 5+ years of experience working with a SaaS company, partnering with the GTM domain, and building data products in a cloud data warehouse (BigQuery, Snowflake, Redshift, Databricks, etc.)
  • Expert SQL skills with strong attention to data accuracy and integrity (R or Python is a plus)
  • Experience owning a dbt project with engineering best practices (e.g., unit testing, data quality checks, reusable building blocks, etc.)
  • Hands-on experience with Segment for web event tracking and customer data integration
  • Experience with version control systems like GitHub, Bitbucket, GitLab, etc. (bonus for CI/CD pipeline management)
  • Proficiency in data modeling to meet business and GTM needs for today and beyond
  • Ability to diagnose data issues proactively and build trust in data across GTM teams
  • Experience with website experimentation (A/B testing, personalization, CRO analysis)
  • Experience with Salesforce, Braze, and paid ad platforms, ensuring seamless GTM data flow
  • Experience implementing and driving adoption of data quality tools like Monte Carlo or similar anomaly detection tools
  • Strong growth mindset—you understand how data drives revenue, acquisition, and retention
  • Design scalable queries and data syncs to provide customer traits, event data, and insights for GTM teams
  • Build customer journey models to drive lead prioritization, retention strategies, and precision targeting (see the sketch after this list)
  • Support web, CX, and content teams in implementing and analyzing experiments, personalization, and conversion optimizations
  • Drive adoption of Monte Carlo for GTM data monitoring and issue resolution, ensuring high trust in insights
  • Ensure timely and reliable data availability across Salesforce, Braze, Segment, and paid media platforms
  • Establish data governance and analytics engineering (AE) best practices, aligning with the core data platform and Analytics teams
  • Collaborate closely with Marketing, Sales, CX, Ops, and Analytics teams to ensure data is leveraged effectively for business decisions
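
As a rough sketch of the customer-traits and journey modeling described above, the query below aggregates Segment-style event data into one row per user, the kind of table a reverse-ETL sync could push to GTM tools. The source, event names, and columns are assumptions, not Calendly's schema.

```sql
-- customer_traits.sql: hypothetical dbt model building per-user traits from
-- Segment-style track events for downstream GTM syncs.

with tracks as (

    select
        user_id,
        event,
        received_at
    from {{ source('segment', 'tracks') }}   -- assumed Segment source

)

select
    user_id,
    min(received_at)                                           as first_seen_at,
    max(received_at)                                           as last_seen_at,
    count(*)                                                   as lifetime_events,
    sum(case when event = 'meeting_booked' then 1 else 0 end)  as meetings_booked
from tracks
group by user_id
```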

Python, SQL, Cloud Computing, ETL, Git, Salesforce, Snowflake, API testing, Data engineering, Communication Skills, Analytical Skills, CI/CD, Problem Solving, RESTful APIs, Attention to detail, Cross-functional collaboration, Data visualization, Strategic thinking, Data modeling, Data analytics, Data management, SaaS, A/B testing

Posted 18 days ago

📍 United States, United Kingdom

🧭 Full-Time

💸 175,000 - 191,300 USD per year

🔍 Software Development

🏢 Company: Kickstarter PBC

  • 8+ years of experience in data engineering, analytics engineering, or related fields.
  • Strong experience with cloud-based data warehouses (Redshift, Snowflake, or BigQuery) and query performance optimization.
  • Expertise in SQL, Python, and data transformation frameworks like dbt.
  • Experience building scalable data pipelines with modern orchestration tools (Airflow, MWAA, Dagster, etc.).
  • Knowledge of real-time streaming architectures (Kafka, Kinesis, etc.) and event-based telemetry best practices.
  • Experience working with business intelligence tools (e.g. Looker) and enabling self-serve analytics.
  • Ability to drive cost-efficient and scalable data solutions, balancing performance with resource management.
  • Familiarity with machine learning operations (MLOps) and experimentation tooling is a plus.
  • Strong problem-solving and communication skills—comfortable working cross-functionally with technical and non-technical stakeholders.
  • Develop, own and improve Kickstarter’s data architecture—optimize our Redshift warehouse, implement best practices for data storage, processing, and orchestration.
  • Design and build scalable ETL/ELT pipelines to transform raw data into clean, usable datasets for analytics, product insights, and machine learning applications.
  • Enhance data accessibility and self-service analytics by improving Looker models and enabling better organizational data literacy.
  • Support real-time data needs by optimizing event-based telemetry and integrating new data streams to fuel new products, personalization, recommendations, and fraud detection.
  • Lead cost optimization efforts—identify and implement more efficient processes and tools to lower costs.
  • Drive data governance and security best practices—ensure data integrity, access controls, and proper lineage tracking.
  • Collaborate across teams to ensure data solutions align with product, growth, and business intelligence needs.

Python, SQL, ETL, Kafka, Snowflake, Airflow, Data engineering, Data visualization, Data modeling

Posted 21 days ago

📍 United States

💸 130,000 - 150,000 USD per year

🔍 Data Analytics

🏢 Company: Velir

  • Expert in SQL
  • Expert with data transformation tools (e.g. dbt)
  • Expert with at least one cloud data warehouse (e.g. Snowflake)
  • Expert with version control and git
  • Proficiency with data modeling approaches and philosophies (e.g. Kimball, OBT); a star-schema query sketch follows this list
  • Proficiency with business intelligence platforms (e.g. Sigma, PowerBI)
  • Knowledge of other common programming languages for data manipulation (e.g. Python, R)
  • Knowledge of common data integration patterns (e.g. CDC, ELT, etc.)
  • Knowledge of common data integration / orchestration platforms (e.g. Fivetran, Azure Data Factory, Apache Airflow)
  • Designs data models, implements data governance, selects and implements transformation and analytics tools, and creates efficient data querying and processing methods.
  • Configures and optimizes aspects of a cloud data warehouse such as data permissions, compute and storage clusters, and table schemas.
  • Works with tools like business intelligence (BI) platforms and data visualization tools.
  • Focuses on opportunities to reduce complexity within client data stacks and to deliver added value for downstream use cases.
  • Provides constructive feedback on peer code or solution design document reviews.
  • Serves as a mentor for less advanced team members and for onboarding new engineers onto the team, helping evolve and document the standards for the analytics engineering practice at Velir.
  • Promotes a positive culture within and across different teams, collaborating with data engineers and data analysts on end-to-end client requirements.
  • Collaborates with clients and functional managers to plan for analytics engineering needs for a product or feature launch.
  • Pairs with a teammate or with someone at a client on strategies for solving an analytics engineering problem.
  • Creates a process or reporting template that helps cross-functional teams solve for common analytics engineering problems.
  • Regularly engages with other teams to make our organization more effective.
  • Takes initiative to identify and solve important problems.
  • Coordinates with others on cross-cutting technical issues.
  • Drives data solution improvements that impact the client experience or empower internal stakeholders (teams like Operations, Customer Support, Finance, etc.) to do their jobs effectively.
  • Optimizes for the predictability and regular cadence of deliverables.
  • Keeps reliability, maintainability and scalability of our clients’ systems top of mind.
  • Embraces long-term ownership of projects while training others, reducing the bus factor and avoiding becoming a blocker.
  • Prioritizes and values undesirable/unowned work that enables the team to move faster.
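
The Kimball proficiency listed above can be pictured with a small star-schema query: one fact table joined to conformed dimensions and rolled up to the grain a dashboard consumes. Table and column names are generic illustrations, not Velir client models.

```sql
-- Generic Kimball-style star-schema query over hypothetical tables.

select
    d.calendar_month,
    p.product_category,
    sum(f.net_revenue) as net_revenue,
    count(f.order_id)  as order_count
from fct_orders as f
join dim_date as d
    on f.order_date_key = d.date_key
join dim_product as p
    on f.product_key = p.product_key
group by
    d.calendar_month,
    p.product_category
order by
    d.calendar_month,
    p.product_category
```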

Python, SQL, Apache Airflow, Business Intelligence, Cloud Computing, ETL, Git, Snowflake, Data engineering, REST API, Data visualization, Data modeling, Data analytics

Posted 24 days ago

🔥 Sr. Analytics Engineer

📍 United States

🧭 Full-Time

💸 135,000 - 180,000 USD per year

🔍 Software Development

🏢 Company: Smartsheet 👥 1001-5000 💰 $3,200,000,000 Post-IPO Debt 6 months ago 🫂 Last layoff about 2 years ago · SaaS · Enterprise · Software

  • 10+ years of experience in advanced data modeling using Data Vault (DV 2.0), dbt, and Snowflake (a hub-model sketch follows this list).
  • Proven experience in building analytics systems with data transformation, dependency, and workload management (Data Vault, AutomateDV, dbt core, Airflow, Snowflake, GitLab, SQL).
  • Proven experience with building and maintaining business analytics layers and semantic layers to enable self-service for non-technical end-users.
  • Strong production experience supporting and maintaining analytics/data warehouse solutions with mission-critical datasets.
  • Familiarity with data governance principles for maintaining datasets to enable a single source of truth.
  • Familiarity with self-serve product analytics tools (Amplitude, Thoughtspot, Tableau, Gainsight, etc.).
  • Knowledge of code management and CI/CD practices (GitLab, Airflow, Snowflake, Terraform).
  • Familiarity with AWS solutions such as S3, EC2, VPC.
  • Leading the implementation of efficient and scalable systems that optimize data transformations from our data lake in Snowflake into clean, documented data products.
  • Contribute to design standards, architectural principles, designs, and processes.
  • Build and maintain a governed, robust, reliable, and scalable analytics system (DW/DV).
  • Collaborating with BI analysts and data scientists to provide proper data accessibility guidelines.
  • Foster data democratization by activating data on critical platforms like the Smartsheet platform, Amplitude, Thoughtspot, and Gainsight.
  • Create and contribute to frameworks that improve data quality and observability to identify and resolve issues faster.
  • Create and implement data testing plans, encompassing data validation, data quality assessment, and regression testing, to identify anomalies, inconsistencies, or errors in produced datasets.
  • Mentor and guide teams and help drive high standards across the team by conducting design reviews and engineering ideation forums.
  • Collaborate with cross-functional teams to assess data privacy needs and define data masking strategies aligned with industry compliance standards (e.g., GDPR, HIPAA, CCPA).
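
For readers unfamiliar with Data Vault 2.0, the modeling approach this role centers on, here is a minimal hub written as a plain dbt model. The source, keys, and hashing are simplified illustrations; in practice a package like AutomateDV generates this pattern from macros.

```sql
-- hub_customer.sql: hypothetical Data Vault 2.0 hub as a dbt model.
-- A hub holds only the business key plus load metadata; descriptive attributes
-- live in satellites and relationships in links.

select distinct
    md5(cast(customer_id as varchar)) as customer_hk,   -- surrogate hash key
    customer_id                       as customer_bk,   -- business key
    current_timestamp                 as load_datetime,
    'crm'                             as record_source
from {{ source('crm', 'customers') }}                   -- assumed raw source
```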

AWS, Leadership, Python, SQL, Apache Airflow, Data Analysis, Data Mining, ETL, Snowflake, Cross-functional Team Leadership, Amplitude Analytics, Tableau, Algorithms, Data engineering, Data Structures, Communication Skills, Analytical Skills, Collaboration, CI/CD, Problem Solving, RESTful APIs, Mentoring, Organizational skills, Compliance, Excellent communication skills, Active listening, Risk Management, Data visualization, Stakeholder management, Data modeling, Data analytics, Data management

Posted 27 days ago