Apply

Data Engineer

Posted about 1 month ago


💎 Seniority level: Middle, 4+ years

📍 Location: Egypt

🔍 Industry: Technology, Information, and Media

🏢 Company: Envision Employment Solutions 👥 11-50 · Consulting · Human Resources · Recruiting

🗣️ Languages: English

⏳ Experience: 4+ years

Requirements:
  • 4+ years of experience in a relevant role.
  • Strong knowledge of data architectures, data warehousing, databases, and data modelling techniques
  • Strong knowledge/familiarity with Databricks or similar tools for building and orchestrating data processing flows using Spark.
  • Strong knowledge of Spark Streaming and/or other streaming frameworks for real or near real time data processing
  • Proficiency in Python, SQL, and other programming languages and frameworks used for data engineering (e.g. Scala, Go, Java…)
  • Experience in working with cloud platforms such as Azure, and using their services and tools for data storage, processing, and analysis
  • Experience in working with text data and applying data preprocessing, cleaning, and analysis techniques, such as tokenization, lemmatization, stemming, stop words removal, etc.
  • Familiarity with natural language processing, topic modelling, and AI for text analysis techniques and applications, such as sentiment analysis, text summarization, text classification, named entity recognition, topic extraction, text generation, etc.
  • Experience in web development, specifically backend RESTful webservice development to expose data programmatically is desirable but not required
  • Experience in using NLP and ML tools and libraries, such as spaCy, NLTK, TensorFlow, PyTorch, scikit-learn, pandas, etc.
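The preprocessing steps this listing names (tokenization, stop-word removal, stemming/lemmatization) can be sketched in a few lines of plain Python. This is an illustrative toy only: the stop-word list and suffix stripper below are made up for the example, and in practice the listed libraries (spaCy, NLTK) supply proper tokenizers, curated stop-word lists, and real lemmatizers.

```python
import re

# Tiny stop-word list for illustration; real pipelines use spaCy or
# NLTK's curated lists instead.
STOP_WORDS = {"the", "a", "an", "and", "or", "is", "are", "to", "of", "in"}

def tokenize(text: str) -> list[str]:
    """Lowercase and split on non-alphanumeric characters."""
    return [t for t in re.split(r"[^a-z0-9]+", text.lower()) if t]

def naive_stem(token: str) -> str:
    """Toy suffix stripping; stands in for a real stemmer (e.g. Porter)."""
    for suffix in ("ing", "ed", "s"):
        if token.endswith(suffix) and len(token) > len(suffix) + 2:
            return token[: -len(suffix)]
    return token

def preprocess(text: str) -> list[str]:
    return [naive_stem(t) for t in tokenize(text) if t not in STOP_WORDS]

print(preprocess("The pipelines are processing streaming text data"))
# → ['pipeline', 'process', 'stream', 'text', 'data']
```

The same three stages (tokenize, filter, normalize) reappear in every text pipeline; only the implementations get more sophisticated.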
Responsibilities:
  • Build scalable and reliable data pipelines that collect, process, and store text data from various sources.
  • Collaborate with data scientists and other engineers to support natural language processing, topic modelling, and AI for text analysis projects.
  • Consume scoring endpoints that classify, translate, and qualify text entries.
Apply

Related Jobs

Apply
🔥 Senior Data Engineer
Posted about 4 hours ago

💸 135,300 - 202,950 USD per year

🔍 Software Development

  • 4+ years of professional experience working in data warehousing, data architecture, and/or data engineering environments, especially using Spark, Hadoop, Hive, etc., with a solid understanding of streaming pipelines.
  • Experience with Airflow, AWS cloud tech stack, any data movement tools such as Fivetran, Scala/Python.
  • You have built large-scale data products and understand the tradeoffs made when building these features
  • BS / MS in Computer Science, Engineering, Mathematics, or a related field
  • Contribute to the design/architecture of new initiatives such as real-time streaming pipelines and tooling around data governance, and build job orchestration abstractions to manage resources on AWS
  • Collaborate with the team to build tools for data science/marketing teams
  • Design integration pipelines for new data sources and improve existing pipelines to perform efficiently at scale
  • Provide technical guidance to the team
  • Leverage best practices in continuous integration and deployment to our cloud-based infrastructure
  • Optimize data access and consumption for our business and product colleagues
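The job-orchestration abstractions this posting mentions reduce, at their core, to executing tasks in dependency order. A minimal sketch using Python's standard-library graphlib; the task names and dependencies are hypothetical, and real orchestrators such as Airflow add scheduling, retries, and distributed execution on top of this idea.

```python
from graphlib import TopologicalSorter

# Toy ETL tasks; in a real DAG these would be operators with side effects.
def extract():   return "raw"
def transform(): return "clean"
def load():      return "stored"

TASKS = {"extract": extract, "transform": transform, "load": load}
DEPS = {"transform": {"extract"}, "load": {"transform"}}  # task -> upstream tasks

# Resolve an execution order that respects every dependency, then run.
order = list(TopologicalSorter(DEPS).static_order())
results = {name: TASKS[name]() for name in order}
print(order)  # → ['extract', 'transform', 'load']
```

Airflow's DAG model is this plus a scheduler: each node runs only after all of its upstream nodes succeed.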
Apply
🔥 Data Engineer [FDE Team]
Posted about 6 hours ago

🧭 Full-Time

🔍 Software Development

🏢 Company: Sweed 👥 101-250 · Cannabis · E-Commerce · Web Development · Marketing · Information Technology · Software

  • Strong SQL skills – ability to write and optimize complex queries.
  • Proficiency in Python
  • Experience handling and cleaning large, messy datasets.
  • Basic web development knowledge – understanding of front-end and back-end principles.
FDEs focus on Implementation, Architecture, and Support, balancing client-facing responsibilities with internal tool development and system maintenance.
Apply

📍 United States

💸 64,000 - 120,000 USD per year

  • Strong PL/SQL, SQL development skills
  • Proficient in multiple languages used in data engineering such as Python, Java
  • Minimum 3-5 years of experience in data engineering working with Oracle and MS SQL
  • Experience with data warehousing concepts and technologies including cloud-based services (e.g. Snowflake)
  • Experience with cloud platforms like Azure and knowledge of infrastructure
  • Experience with data orchestration tools (e.g. Azure Data Factory, DataBricks workflows)
  • Understanding of data privacy regulations and best practices
  • Experience working with remote teams
  • Experience working on a team with a CI/CD process
  • Familiarity using tools like Git, Jira
  • Bachelor's degree in Computer Science or Computer Engineering
  • Design, implement and maintain scalable pipelines and architecture to collect, process, and store data from various sources.
  • Unit test and document solutions that meet product quality standards prior to release to QA.
  • Identify and resolve performance bottlenecks in pipelines due to data, queries and processing workflows to ensure efficient and timely data delivery.
  • Implement data quality checks and validation processes to ensure accuracy, completeness, and consistency of data delivery.
  • Work with Data Architect and implement best practices for data governance, quality and security.
  • Collaborate with cross-functional teams to identify and address data needs.
  • Ensure technology solutions support the needs of the customer and/or organization.
  • Define and document technical requirements.
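The data-quality responsibility above (accuracy, completeness, consistency) can be illustrated with a simple row-level validator. The field names and rules here are hypothetical, not taken from the posting; production pipelines would more likely lean on a framework such as Great Expectations or dbt tests.

```python
# Hypothetical row-level quality checks: completeness (required fields),
# validity (non-negative amounts), and consistency (unique ids).
def check_rows(rows: list[dict]) -> dict:
    required = ("id", "amount", "currency")
    issues = {"missing_field": 0, "negative_amount": 0, "duplicate_id": 0}
    seen_ids = set()
    for row in rows:
        if any(row.get(f) is None for f in required):
            issues["missing_field"] += 1
            continue  # incomplete rows skip the remaining checks
        if row["amount"] < 0:
            issues["negative_amount"] += 1
        if row["id"] in seen_ids:
            issues["duplicate_id"] += 1
        seen_ids.add(row["id"])
    return issues

rows = [
    {"id": 1, "amount": 10.0, "currency": "USD"},
    {"id": 1, "amount": -5.0, "currency": "USD"},  # duplicate id, negative
    {"id": 2, "amount": None, "currency": "USD"},  # missing amount
]
print(check_rows(rows))
# → {'missing_field': 1, 'negative_amount': 1, 'duplicate_id': 1}
```

Emitting counts per rule, rather than failing on the first bad row, is what lets a pipeline report quality trends over time.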

Python · SQL · ETL · Git · Java · Oracle · Snowflake · Azure · Data engineering · CI/CD · RESTful APIs

Posted about 17 hours ago
Apply

🔍 AI and data analytics consulting

🏢 Company: Unit8 SA

  • Proficient software engineer who knows the fundamentals of computer science and masters at least one widely adopted programming language (Python, Java, C#, C++).
  • Know how to write distributed services and work with high-volume heterogeneous data, preferably with distributed systems such as Spark.
  • Knowledgeable about data governance, data access, and data storage techniques.
  • Experience with cloud technologies is a strong plus.
  • Good presentation and communication skills, with the ability to explain complex analytical concepts to people from other fields.
  • Strong client-facing skills: comfortable interacting with clients (business & technical audience), delivering presentations, problem-solving mindset.
  • Design, build, maintain, and troubleshoot data pipelines and processing systems that are relied on for both production and analytics applications, using a variety of open-source and closed-source technologies.
  • Implement best practices in CI/CD and help drive optimization, testing, and tooling to improve data quality.
  • Contribute to the implementation and engineering of systems at different scales: from small proof-of-concepts to larger end-to-end data systems.
  • Work with our customers to understand their challenges, design and implement solutions.
  • Collaborate with other software engineers, ML experts, and business stakeholders, taking learning and leadership opportunities that will arise every single day.
  • Work in multi-functional agile teams to continuously experiment, iterate and deliver on new product objectives.
Posted about 21 hours ago
Apply
🔥 Data Engineer
Posted about 22 hours ago

🔍 AI

🏢 Company: Prolific

  • 2+ years of hands-on experience deploying production-quality code, with proficiency in Python for data processing and related packages.
  • Deep understanding of SQL and analytical data warehouses (Snowflake, Redshift preferred), with proven experience implementing ETL/ELT best practices at scale.
  • Hands-on experience with data pipeline tools (Airflow, dbt) and a strong ability to optimise for performance and reliability.
  • Ability to design and develop robust data APIs and services that expose data to applications, bridging analytical and operational systems.
  • Strong data modelling skills and familiarity with the Kimball methodology to create efficient, scalable data structures.
  • Commitment to continuously improving product quality, security, and performance through rigorous testing and code reviews.
  • Meticulous approach to creating and maintaining architecture and systems documentation.
  • Ability to work across teams to understand and address diverse data needs while maintaining data integrity.
  • Desire to continually keep up with advancements in data engineering practices and technologies.
  • Exceptional analytical skills to troubleshoot complex data issues and implement effective solutions.
  • Capability to ship medium features independently while contributing to the team's overall objectives.
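The Kimball methodology named above organizes a warehouse into fact tables joined to conformed dimensions (a star schema). A tiny illustrative example in SQLite; the table and column names are hypothetical, chosen only to show the fact/dimension split and a typical star-join rollup.

```python
import sqlite3

# Hypothetical star schema: one fact table with a foreign key into a
# date dimension, using a Kimball-style surrogate date key.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE dim_date (
    date_key  INTEGER PRIMARY KEY,  -- surrogate key, e.g. 20240115
    full_date TEXT, month INTEGER, year INTEGER
);
CREATE TABLE fact_orders (
    order_id INTEGER PRIMARY KEY,
    date_key INTEGER REFERENCES dim_date(date_key),
    amount   REAL
);
""")
conn.execute("INSERT INTO dim_date VALUES (20240115, '2024-01-15', 1, 2024)")
conn.executemany("INSERT INTO fact_orders VALUES (?, ?, ?)",
                 [(1, 20240115, 99.5), (2, 20240115, 20.5)])

# Facts aggregated by a dimension attribute: the canonical star join.
total = conn.execute("""
    SELECT d.year, SUM(f.amount)
    FROM fact_orders f JOIN dim_date d USING (date_key)
    GROUP BY d.year
""").fetchone()
print(total)  # → (2024, 120.0)
```

Keeping measures in narrow fact tables and descriptive attributes in dimensions is what makes these rollups efficient and the schema easy to extend.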
  • Build and maintain robust data pipelines from internal databases and SaaS applications ensuring timely and accurate data delivery.
  • Maintain our data warehouse with high-quality, well-structured data that supports analytics and business operations.
  • Design and implement scalable data infrastructure that accommodates our growing data volume and complexity.
  • Create and maintain APIs and microservices that expose data to applications, enabling seamless integration between data systems and business applications.
  • Establish processes and tools to monitor data quality, identify issues, and implement fixes promptly.
  • Create and maintain comprehensive documentation of data flows, models, and systems for knowledge sharing.
  • Work closely with analytics, research, and product teams to ensure their data needs are addressed effectively.
  • Implement and advocate for data engineering best practices across the organisation.
  • Plan and execute system expansion as needed to support the company's growth and evolving analytical needs.
  • Continuously optimise data pipelines and warehouse performance to improve efficiency and reduce costs.
  • Ensure all data systems adhere to security best practices and compliance requirements.
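For the "APIs and microservices that expose data" responsibility, a minimal read-only data API can be sketched with only the Python standard library. Everything here (the endpoint path, the in-memory table) is hypothetical; a real service would use a framework such as Flask or FastAPI backed by an actual database.

```python
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

# Hypothetical dataset served as JSON; stands in for a warehouse query.
TABLE = [{"id": 1, "label": "positive"}, {"id": 2, "label": "negative"}]

class DataHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/records":
            body = json.dumps(TABLE).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(404)
            self.end_headers()

    def log_message(self, *args):  # silence per-request logging
        pass

# Port 0 asks the OS for any free port; serve in a background thread.
server = HTTPServer(("127.0.0.1", 0), DataHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

url = f"http://127.0.0.1:{server.server_port}/records"
with urllib.request.urlopen(url) as resp:
    data = json.loads(resp.read())
print(data)
server.shutdown()
```

This is the "bridge between analytical and operational systems" the posting describes: the warehouse stays the source of truth, and applications consume a stable HTTP contract instead of querying it directly.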
Apply
🔥 Senior Data Engineer (R12931)
Posted about 22 hours ago

🔍 Fintech

🏢 Company: Oportun 👥 1001-5000 · 💰 $235,000,000 Post-IPO Debt 5 months ago · 🫂 Last layoff over 1 year ago · Debit Cards · Consumer Lending · Financial Services · FinTech

  • Bachelor’s degree and a minimum of 5+ years of related experience; or an advanced degree with relevant internships.
  • Expertise working with data technologies like Databricks, Alation, Microsoft Excel, and various data sources like MariaDB and AWS Redshift.
  • Proficiency in data query and analysis tools, programming languages (e.g., SQL, Python, R), and data visualization tools (e.g., Tableau, Domo, Power BI).
  • Strong analytical skills, with the ability to collect, organize, analyze, and disseminate significant amounts of information, and to identify and address system issues and data discrepancies.
  • Ability to work independently and manage multiple projects simultaneously.
  • Strong communication and interpersonal skills to effectively collaborate with stakeholders at all levels.
  • Detail-oriented and highly organized with a focus on delivering results.
  • Strong customer service orientation, with the ability to provide timely and effective support to system users.
  • Good working proficiency with data technologies like Databricks / Snowflake / any large-scale data lake.
  • Good working proficiency with dashboarding tools like Domo / Power BI or equivalent.
  • Knowledge of data governance tools like Alation / DBT
  • Advanced knowledge and experience with Microsoft Excel.
  • Perform in-depth support for all data topics related to Oportun's financial products.
  • Collaborate with product managers, engineers, and designers to contribute to the development and improvement of financial products.
  • Create and maintain detailed reports and dashboards to track product performance and KPIs.
  • Work closely with various teams, including marketing, compliance, and operations, to align product strategies with business goals.
  • Document findings, insights, and recommendations for product enhancements and communicate them effectively to stakeholders.
Apply
🔥 Data Engineer
Posted 1 day ago

📍 United States

🧭 Full-Time

🔍 Sustainable Agriculture

🏢 Company: Agrovision

  • Experience with RDBMS (e.g., Teradata, MS SQL Server, Oracle) in production environments is preferred
  • Hands-on experience in data engineering and databases/data warehouses
  • Familiarity with Big Data platforms (e.g., Hadoop, Spark, Hive, HBase, Map/Reduce)
  • Expert level understanding of Python (e.g., Pandas)
  • Proficient in shell scripting (e.g., Bash) and Python data application development (or similar)
  • Excellent collaboration and communication skills with teams
  • Strong analytical and problem-solving skills, essential for tackling complex challenges
  • Experience working with BI teams and tooling (e.g. PowerBI), supporting analytics work and interfacing with Data Scientists
  • Collaborate with data scientists to ensure high-quality, accessible data for analytical and predictive modeling
  • Design and implement data pipelines (ETLs) tailored to meet business needs and digital/analytics solutions
  • Enhance data integrity, security, quality, and automation, addressing system gaps proactively
  • Support pipeline maintenance, troubleshoot issues, and optimize performance
  • Lead and contribute to defining detailed scalable data models for our global operations
  • Ensure data security standards are met and upheld by contributors, partners and regional teams through programmatic solutions and tooling

Python · SQL · Apache Hadoop · Bash · ETL · Data engineering · Data science · RDBMS · Pandas · Spark · Communication Skills · Analytical Skills · Collaboration · Problem Solving · Data modeling

Apply

📍 LATAM

🧭 Full-Time

🔍 Financial Services

🏢 Company: South Geeks 👥 101-250 · Web Development · Software Engineering · Enterprise Software · Software

  • Proficiency in administering one or more platforms: Snowflake (required), DBT (required), GitHub, Workato (and/or Make), or other tools (preferred).
  • Data engineering experience with a focus on DBT and Snowflake.
  • Strong desire to expand skills across multiple platforms and contribute to both administrative and engineering functions.
  • Comfort with dynamic, multi-role environments that blend administration with engineering work.
  • Collaborative mindset, able to thrive in a team-first, cross-functional setting.
  • Administer one or more platforms, focusing on Snowflake, DBT, and Fivetran, plus other tools: GitHub, Workato (and/or Make), and High Touch.
  • Participate in cross-training to administer multiple platforms, ensuring seamless coverage across the team.
  • Collaborate on data engineering projects, using DBT and Snowflake as part of the stack.
  • Take part in development opportunities and engineering work to broaden your expertise and career path.
  • Focus on team-based service delivery, rather than individual responsibilities, ensuring all systems are effectively managed by the group.

SQL · Snowflake · Data engineering

Posted 1 day ago
Apply
🔥 Sr. Data Engineer
Posted 2 days ago

🔍 Software Development

  • Experience with AWS Big Data services
  • Experience with Snowflake
  • Experience with dbt
  • Experience with Python
  • Experience with distributed systems
  • Design and build scalable data systems that support advanced analytics and business intelligence.
  • Develop and maintain data pipelines.
  • Implement data management best practices.
  • Work closely with Product Managers, Data Analysts, and Software Developers to support data-driven decision-making.
Apply

📍 Lithuania

💸 4,000 - 6,000 EUR per month

🔍 Software Development

🏢 Company: Softeta

  • 4+ years of experience as a Data Engineer
  • Experience with Azure (Certifications are a Plus)
  • Experience with Databricks, Azure Data Lake, Data Factory and Apache Airflow
  • CI/CD or infrastructure as code
  • Knowledge of Medallion Architecture or multi-hop architecture
  • Experience developing and administering ETL processes in the Cloud (Azure, AWS or GCP) environment
  • Strong programming skills in Python and SQL
  • Strong problem-solving and analytical skills
  • Design, develop, and maintain data pipelines and ETL processes
  • Data modeling, data cleansing
  • Automating data processing workflows using Airflow or other workflow management tools
  • Optimizing the performance of databases, including designing and implementing data structures and using indexes appropriately
  • Implement data quality and data governance processes
  • Being a data advocate and helping unlock business value by using data
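The database-optimization bullet above often comes down to indexing. A small SQLite sketch (hypothetical table) showing how adding an index changes the query plan for a selective lookup from a full scan to an index search:

```python
import sqlite3

# Illustrative only: 1000 rows spread over 100 user ids, then a lookup
# on a single user id before and after an index exists.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (id INTEGER PRIMARY KEY, user_id INTEGER, payload TEXT)")
conn.executemany("INSERT INTO events (user_id, payload) VALUES (?, ?)",
                 [(i % 100, "x") for i in range(1000)])

query = "SELECT COUNT(*) FROM events WHERE user_id = 42"

# Before the index, SQLite has no choice but to scan every row.
plan_before = conn.execute("EXPLAIN QUERY PLAN " + query).fetchone()[-1]

conn.execute("CREATE INDEX idx_events_user ON events (user_id)")
plan_after = conn.execute("EXPLAIN QUERY PLAN " + query).fetchone()[-1]

print(plan_before)  # e.g. 'SCAN events'
print(plan_after)   # e.g. 'SEARCH events USING COVERING INDEX idx_events_user (user_id=?)'
```

The exact plan wording varies by SQLite version, but the scan-to-search shift is the point: the same discipline (inspect the plan, index the selective predicate) applies to the cloud warehouses these roles use.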

Python · SQL · Apache Airflow · ETL · Azure · Data engineering · CI/CD · Data modeling

Posted 2 days ago
Apply

Related Articles

Posted 14 days ago

Why is remote work such a nice opportunity?

Why is remote work so nice? Let's find out!

Posted 7 months ago

Insights into the evolving landscape of remote work in 2024 reveal the importance of certifications and continuous learning. This article breaks down emerging trends, sought-after certifications, and provides practical solutions for enhancing your employability and expertise. What skills will be essential for remote job seekers, and how can you navigate this dynamic market to secure your dream role?

Posted 7 months ago

Explore the challenges and strategies of maintaining work-life balance while working remotely. Learn about unique aspects of remote work, associated challenges, historical context, and effective strategies to separate work and personal life.

Posted 7 months ago

Google is gearing up to expand its remote job listings, promising more opportunities across various departments and regions. Find out how this move can benefit job seekers and impact the market.

Posted 7 months ago

Learn about the importance of pre-onboarding preparation for remote employees, including checklist creation, documentation, tools and equipment setup, communication plans, and feedback strategies. Discover how proactive pre-onboarding can enhance job performance, increase retention rates, and foster a sense of belonging from day one.