We are looking for an experienced Hadoop/Spark developer to help design, develop, and secure Big Data solutions. The ideal candidate has a broad understanding of Big Data tools and technologies and the fundamentals of how they operate.
- Design and develop applications using Big Data technologies such as Hive, Spark, HBase, and the Hadoop framework.
- Read, extract, transform, stage, and load data to multiple targets, including Hadoop and other databases.
- Translate complex functional and technical requirements into detailed designs.
- Migrate existing data processing from standalone or legacy scripts to the Hadoop framework.
- Identify and apply performance tuning in Hive, Spark, HBase, and Kafka.
- Perform proof-of-concept (POC) deployments and conversions.
- Maintain security and data privacy.
- Propose best practices/standards.
- 5+ years of experience designing and developing enterprise-level data, integration, and reporting/analytics solutions, with a proven track record of delivering backend systems that operate within a complex ecosystem.
- Minimum 3 years of development experience on the Big Data/Hadoop platform, including Hive, Spark, Sqoop, HBase, Kafka, and related tools.
- Experience working with multiple Hadoop distributions (e.g., Hortonworks, MapR).
- Current knowledge of Unix/Linux/Python scripting, with solid experience in code optimization and high-performance computing.
- Scala development experience is a big plus.
- Strong written and verbal communication skills.
- Excellent analytical and problem-solving skills.