📍 United States
🧭 Full-Time
💸 108,000 - 162,000 USD per year
🔍 Insurance
🏢 Company: Openly · 👥 251-500 · 💰 $100,000,000 Series D over 1 year ago · Life Insurance, Property Insurance, Insurance, Commercial Insurance, Auto Insurance
- 1 to 2 years of data engineering and data management experience.
- Scripting skills in Python.
- Basic understanding and usage of a development and deployment lifecycle, automated code deployments (CI/CD), code repositories, and code management.
- Experience with Google Cloud data store and data orchestration technologies and concepts.
- Hands-on experience and understanding of the entire data pipeline architecture: data replication tools, staging data, data transformation, data movement, and cloud-based data platforms (a minimal pipeline sketch follows this list).
- Understanding of modern, next-generation data warehouse platforms, such as the lakehouse and the multi-layered data warehouse.
- Proficiency with SQL optimization and development.
- Ability to understand data architecture and modeling as it relates to business goals and objectives.
- Ability to gain an understanding of data requirements, translate them into source to target data mappings, and build a working solution.
- Experience with Terraform preferred but not required.
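To give a flavor of the staged load-then-transform pipeline work described above, here is a minimal sketch, assuming Apache Airflow 2.x with the Google provider package installed; the bucket, project, dataset, and table names are hypothetical placeholders, not Openly's actual stack.

```python
# Minimal sketch of a staged GCS -> BigQuery pipeline on Airflow 2.x.
# All resource names below (bucket, project, dataset, tables) are hypothetical.
from datetime import datetime

from airflow import DAG
from airflow.providers.google.cloud.operators.bigquery import (
    BigQueryInsertJobOperator,
)
from airflow.providers.google.cloud.transfers.gcs_to_bigquery import (
    GCSToBigQueryOperator,
)

with DAG(
    dag_id="policy_events_daily",        # hypothetical pipeline name
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    # Replicate/stage raw files from Cloud Storage into a landing table.
    load_raw = GCSToBigQueryOperator(
        task_id="load_raw",
        bucket="example-landing-bucket",                 # hypothetical
        source_objects=["policy_events/{{ ds }}/*.json"],
        destination_project_dataset_table="staging.policy_events",
        source_format="NEWLINE_DELIMITED_JSON",
        write_disposition="WRITE_TRUNCATE",
    )

    # Transform staged rows into the warehouse layer with SQL.
    transform = BigQueryInsertJobOperator(
        task_id="transform",
        configuration={
            "query": {
                "query": (
                    "SELECT policy_id, event_type, event_ts "
                    "FROM staging.policy_events "
                    "WHERE DATE(event_ts) = '{{ ds }}'"
                ),
                "destinationTable": {
                    "projectId": "example-project",      # hypothetical
                    "datasetId": "warehouse",
                    "tableId": "policy_events",
                },
                "writeDisposition": "WRITE_TRUNCATE",
                "useLegacySql": False,
            }
        },
    )

    load_raw >> transform
```

In practice, the SQL would live in version-controlled files and ship through automated deployments, matching the CI/CD and code-management expectations listed above.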
- Design, create, and maintain data solutions, including data pipelines and data structures.
- Work with data users, data science, and business intelligence personnel to create data solutions used across various projects.
- Translate concepts into code to enhance our data management frameworks and services, striving to provide a high-quality data product to our data users (see the sketch after this list).
- Collaborate with our product, operations, and technology teams to develop and deploy new solutions related to data architecture and data pipelines to enable a best-in-class product for our data users.
- Collaborate with teammates on design and solution decisions related to architecture, operations, deployment techniques, technologies, policies, processes, etc.
- Participate in domain meetings, stand-ups, weekly 1:1s, team collaborations, and biweekly retros.
- Assist in educating others on different aspects of data (e.g., data management best practices, data pipelining best practices, SQL tuning).
- Build and share your knowledge within the data engineering team and with others in the company (e.g., tech all-hands, tech learning hour, domain meetings, code sync meetings).
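As a sketch of the "concepts to code" responsibility above, and of turning a source-to-target mapping into a working solution, here is a minimal, framework-free example; every field name, sample value, and check here is hypothetical.

```python
# Minimal sketch: apply a declarative source-to-target mapping, then run a
# simple not-null quality check before publishing. All names are hypothetical.

# Declarative source-to-target mapping (source column -> target column).
SOURCE_TO_TARGET = {
    "POL_NO": "policy_id",
    "EFF_DT": "effective_date",
    "PREM_AMT": "premium_usd",
}


def transform_row(source_row: dict) -> dict:
    """Rename mapped source fields to their target names; drop the rest."""
    return {
        target: source_row[source]
        for source, target in SOURCE_TO_TARGET.items()
        if source in source_row
    }


def check_not_null(rows: list[dict], column: str) -> tuple[bool, int]:
    """Return (passed, null_count) for a simple not-null expectation."""
    nulls = sum(1 for row in rows if row.get(column) is None)
    return nulls == 0, nulls


if __name__ == "__main__":
    raw = [
        {"POL_NO": "P-1001", "EFF_DT": "2024-06-01", "PREM_AMT": 850.0, "LEGACY": "x"},
        {"POL_NO": None, "EFF_DT": "2024-06-02", "PREM_AMT": 920.0},
    ]
    staged = [transform_row(row) for row in raw]
    passed, nulls = check_not_null(staged, "policy_id")
    print(staged)
    print(f"not_null(policy_id): passed={passed}, nulls={nulls}")
```

A declarative mapping like this keeps the source-to-target documentation and the transformation code in one place, so the two cannot drift apart.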
Docker, PostgreSQL, Python, SQL, Apache Airflow, Cloud Computing, ETL, GCP, Kafka, Kubernetes, Data engineering, Go, REST API, CI/CD, Terraform, Data modeling, Scripting, Data management
Posted 2 days ago