Big Data Engineer
- The candidate must be a self-starter who can work under general guidelines in a fast-paced environment
- Overall 3 to 9 years of software development experience, including a minimum of 2 years of Data
Warehousing domain knowledge
- Must have at least 3 years of hands-on working knowledge of Big Data technologies such as
Hive, Hadoop, HBase, Spark, NiFi, Scala, and Kafka
- Excellent knowledge of SQL and Linux shell scripting
- Bachelor’s/Master’s/Engineering degree from a well-reputed university
- Strong communication, interpersonal, learning, and organizing skills, matched with the ability to
manage stress, time, and people effectively
- Proven experience coordinating many dependencies and multiple demanding stakeholders
in a complex, large-scale deployment environment
- Ability to manage a diverse and challenging stakeholder community
- Diverse knowledge of, and experience working on, Agile deliveries and Scrum teams
- Responsible for the documentation, design, development, and architecture of Hadoop applications
- Convert complex technical and functional requirements into detailed designs
- Work as a senior developer or individual contributor as the situation requires
- Adhere to Scrum timelines and deliver accordingly
- Prepare Unit/SIT/UAT test cases and log the results
- Coordinate SIT and UAT testing; gather feedback and provide the necessary
remediation/recommendations on time
- Drive small projects individually
- Coordinate changes and deployments on time