
Data Engineer II

Posted about 7 hours ago


📍 Location: US

💸 Salary: 100000.0 - 120000.0 USD per year

🔍 Industry: Education Technology

🏢 Company: Blueprint Test Prep

🪄 Skills: AWS, Python, SQL, Cloud Computing, Data Analysis, DynamoDB, ETL, Tableau, Algorithms, Data engineering, Analytical Skills, CI/CD, Problem Solving, Attention to detail, Data visualization, Data modeling, Data analytics

Requirements:
  • Experience in identifying, designing and implementing infrastructure for greater scalability, optimizing data delivery, and automating manual processes
  • Experience in backend databases and surrounding technologies such as Redshift, DynamoDB, Glue and S3
  • Experience building BI models and visualizations in tools such as Looker, Tableau, or Power BI
  • Ability to create visualizations of complex data, experience with Looker preferred
  • Knowledge of modeling, including proficiency in acquiring, organizing, and analyzing large amounts of data
  • Strong attention to detail and data accuracy, and the ability to think holistically
  • Some experience with the analysis of AI algorithms
Responsibilities:
  • Design innovative solutions that push the boundaries of the education technology space
  • Understand that quality data is the differentiator for our learners
  • Generate models and visualizations that tell clear stories and enable data-driven solutions
  • Understand the KPIs that move the needle for the business, and recognize that quality insights can be a huge differentiator for our learners
  • Resolve complex problems, break down complex data and propose creative solutions
  • Be a beacon of trust for everyone at Blueprint and provide analytical and logical solutions to problems

Related Jobs


📍 United States

🧭 Full-Time

💸 108000.0 - 162000.0 USD per year

🔍 Insurance

🏢 Company: Openly (👥 251-500; 💰 $100,000,000 Series D over 1 year ago; Life Insurance, Property Insurance, Insurance, Commercial Insurance, Auto Insurance)

  • 1 to 2 years of data engineering and data management experience.
  • Scripting skills in Python.
  • Basic understanding and usage of a development and deployment lifecycle, automated code deployments (CI/CD), code repositories, and code management.
  • Experience with Google Cloud data store and data orchestration technologies and concepts.
  • Hands-on experience with and understanding of the entire data pipeline architecture: data replication tools, staging data, data transformation, data movement, and cloud-based data platforms.
  • Understanding of modern, next-generation data warehouse platforms, such as the lakehouse and multi-layered data warehouse.
  • Proficiency with SQL optimization and development.
  • Ability to understand data architecture and modeling as it relates to business goals and objectives.
  • Ability to gain an understanding of data requirements, translate them into source to target data mappings, and build a working solution.
  • Experience with Terraform preferred but not required.
  • Design, create, and maintain data solutions. This includes data pipelines and data structures.
  • Work with data users, data science, and business intelligence personnel to create data solutions used in various projects.
  • Translate concepts into code that enhances our data management frameworks and services, working toward a high-quality data product for our data users.
  • Collaborate with our product, operations, and technology teams to develop and deploy new solutions related to data architecture and data pipelines to enable a best-in-class product for our data users.
  • Collaborate with teammates on design and solution decisions related to architecture, operations, deployment techniques, technologies, policies, and processes.
  • Participate in domain meetings, stand-ups, weekly 1:1s, team collaborations, and biweekly retros
  • Assist in educating others on different aspects of data (e.g. data management best practices, data pipelining best practices, SQL tuning)
  • Build and share your knowledge within the data engineer team and with others in the company (e.g. tech all-hands, tech learning hour, domain meetings, code sync meetings, etc.)

🪄 Skills: Docker, PostgreSQL, Python, SQL, Apache Airflow, Cloud Computing, ETL, GCP, Kafka, Kubernetes, Data engineering, Go, REST API, CI/CD, Terraform, Data modeling, Scripting, Data management

Posted 2 days ago

📍 USA, Canada

🧭 Full-Time

🔍 Data Analytics

🏢 Company: Wrapbook

  • Hands-on experience deploying production-quality code in fast-paced environments
  • Proficiency in Python (preferred), Java, or Scala for data processing and pipeline development
  • Ability to thrive in fast-changing, ambiguous situations, balancing immediate needs with long-term goals
  • Experience with data pipeline tools, such as Airbyte for ingestion and dbt for transformation/modeling
  • Hands-on expertise with container orchestration tools, such as Kubernetes, and cloud-native environments (e.g., AWS)
  • Proficiency with workflow automation and orchestration tools, like Dagster or Apache Airflow
  • Deep familiarity with PostgreSQL, including administration, tuning, and provisioning in cloud platforms (e.g., AWS)
  • Strong experience in ETL/ELT pipelines and data modeling, including raw vs. curated datasets, star schemas, and incremental loads
  • Advanced SQL skills, with expertise in relational databases and data warehouses (especially Snowflake)
  • Knowledge of best practices in data governance and security
  • Excellent problem-solving skills and ability to troubleshoot complex issues
  • Strong communication skills to collaborate with cross-functional teams
  • Own and optimize data pipeline infrastructure to ensure reliable, efficient, and scalable data flows from diverse sources.
  • Contribute to the development of the data engineering roadmap in collaboration with Platform leadership and cross-functional stakeholders.
  • Design, build, and maintain scalable ETL/ELT pipelines to transform raw data into curated datasets within AWS S3 and Snowflake.
  • Implement and standardize data governance practices, ensuring data quality, lineage tracking, schema consistency, and compliance across pipelines.
  • Collaborate with analytics and engineering teams to manage backfills, resolve schema drift, and implement best practices for incremental loads.
  • Lead the design and implementation of a layered data architecture to improve scalability, governance, and self-service analytics.
  • Develop and implement data contracts by collaborating across teams to align business goals with technical needs.
  • Evaluate, plan, and execute new data tools, infrastructure, and system expansions to support company growth and evolving analytics needs.
  • Deliver scalable, efficient, and maintainable code by applying architectural best practices and adhering to data engineering standards.
  • Maintain SLAs for data freshness, accuracy, and availability by defining clear metrics that foster stakeholder trust and ensure consistent, reliable data delivery.
  • Collaborate with the Data Analytics team to facilitate the delivery of strategic initiatives.

🪄 Skills: AWS, Docker, PostgreSQL, Python, SQL, Apache Airflow, Bash, Cloud Computing, ETL, Git, Kubernetes, Snowflake, Algorithms, Data engineering, Data Structures, REST API, Communication Skills, Analytical Skills, CI/CD, Problem Solving, RESTful APIs, Data visualization, Ansible, Data modeling, Data management

Posted 11 days ago

📍 United States

🧭 Full-Time

💸 110000.0 - 130000.0 USD per year

🔍 Energy solutions

  • Bachelor's degree in Computer Science, Physics/Engineering, Business, or Mathematics.
  • Experience building ETL pipelines; real-time pipelines are a plus.
  • Proficiency in Python and SQL.
  • At least 4 years of experience in data warehouse development with a strong foundation in dimensional data modeling.
  • 4+ years of experience in SQL query creation and ETL design, implementation, and maintenance.
  • 4+ years of experience developing data pipelines in Python and with AWS services like S3, Redshift, and RDS.
  • Strong analytical skills with experience handling diverse datasets.
  • Excellent oral, written, and interpersonal communication skills.
  • Detail-oriented with the ability to prioritize and work independently.
  • Collaborate with stakeholders to understand data requirements and deliver data solutions aligned with business objectives.
  • Analyze data sources and design scalable data pipelines and ETL processes with Python, SQL, and AWS technologies.
  • Develop and maintain data warehouses, optimizing data storage and retrieval.
  • Build and populate schemas, automate reporting processes, and document technical specifications, ETL processes, data mappings, and data dictionaries.
  • Support the Data Science Center of Excellence (DSCOE) in data engineering initiatives.

🪄 Skills: AWS, Python, SQL, ETL, Data engineering, Communication Skills, Analytical Skills

Posted 3 months ago