5+ years of experience in languages such as Java, Scala, PySpark, Perl, Shell Scripting, and Python
Working knowledge of Hadoop ecosystem applications (MapReduce, YARN, Pig, HBase, Hive, Spark, and more)
Strong experience working with data pipelines in multi-terabyte data warehouses, including diagnosing and resolving performance and scalability issues
Strong SQL (MySQL, Hive, etc.) and NoSQL (MongoDB, HBase, etc.) skills, including writing complex queries and performance tuning (a brief query sketch follows this list)
Knowledge of data modeling, partitioning, indexing, and architectural database design
Experience using source code/version control systems such as Git
Experience with continuous build and test processes using tools such as GitLab, SBT, and Postman
Experience with Search Engines, Name/Address Matching, or Linux text processing
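As a loose illustration of the SQL and partitioning skills above, here is a minimal PySpark sketch, assuming Spark with Hive support enabled; the database, table, and column names (sales_db.orders, order_date, amount, customer_id) are hypothetical and not part of this posting.

```python
# Minimal sketch: run a partition-pruned aggregation against a Hive table.
# Assumes a Spark installation configured with a Hive metastore.
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    .appName("partition-pruned-aggregation")  # illustrative app name
    .enableHiveSupport()                      # read tables from the Hive metastore
    .getOrCreate()
)

# Filtering on the partition column (order_date) lets Hive/Spark prune partitions
# instead of scanning the full multi-terabyte table.
daily_totals = spark.sql("""
    SELECT customer_id, SUM(amount) AS total_amount
    FROM sales_db.orders
    WHERE order_date = '2024-01-01'
    GROUP BY customer_id
""")

daily_totals.show(10)
```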
Responsibilities:
Implement and maintain the big data platform and infrastructure
Develop, optimize, and tune MySQL stored procedures, scripts, and indexes
Develop Hive schemas and scripts, Spark jobs in PySpark and Scala, and UDFs in Java
Design, develop, and maintain automated, complex, and efficient ETL processes for batch record matching across multiple large-scale datasets, including supporting documentation
Develop and maintain data pipelines using Airflow or similar tools, and monitor, debug, and analyze those pipelines (a minimal Airflow sketch follows this list)
Troubleshoot Hadoop cluster and query issues, evaluate query plans, and optimize schemas and queries
Apply strong interpersonal skills to resolve problems professionally, lead working groups, and negotiate consensus
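As a loose illustration of the pipeline work above, here is a minimal Airflow sketch, assuming Airflow 2.x and spark-submit available on the worker; the DAG id, schedule, script path (/opt/jobs/match_records.py), and spark-submit flags are hypothetical, not from this posting.

```python
# Minimal sketch: a daily Airflow DAG that submits a PySpark batch record-matching job.
from datetime import datetime, timedelta

from airflow import DAG
from airflow.operators.bash import BashOperator

default_args = {
    "owner": "data-engineering",           # hypothetical team name
    "retries": 2,                          # retry transient failures before alerting
    "retry_delay": timedelta(minutes=10),
}

with DAG(
    dag_id="batch_record_matching",        # hypothetical pipeline name
    default_args=default_args,
    schedule_interval="@daily",            # batch cadence is an assumption
    start_date=datetime(2024, 1, 1),
    catchup=False,
) as dag:
    # Submit the PySpark matching job on YARN; path and flags are illustrative.
    run_matching = BashOperator(
        task_id="run_record_matching",
        bash_command=(
            "spark-submit --master yarn --deploy-mode cluster "
            "/opt/jobs/match_records.py --run-date {{ ds }}"
        ),
    )
```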