We are looking for Junior and Senior developers to help build an advanced analytics platform leveraging Big Data technologies and to transform our legacy systems. This role offers an exciting, fast-paced, constantly changing, and challenging work environment, and will play an important part in informing and influencing high-level decisions.
Job Description: Big Data Engineers
- Design, architect, and support new and existing data and ETL pipelines, and recommend improvements and modifications.
- Create optimal data pipeline architectures and systems using Apache Airflow.
- Assemble large, complex data sets that meet functional and non-functional business requirements.
- Ingest data into our data lake and provide frameworks and services for operating on that data, including the use of Spark and Databricks.
- Analyze, debug, and correct issues with data pipelines.
- Build and operate the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of sources using SQL, Spark, and Azure technologies.
Desired Candidate Profile:
- The candidate must be a self-starter who can work under general guidelines in a fast-paced environment.
- 3 to 8 years of overall software development experience, with at least 2 years of Data Warehousing domain knowledge.
- At least 3 years of hands-on experience with Big Data technologies such as Hive, HBase, Spark, and Kafka.
- Excellent knowledge of SQL and Linux shell scripting.
- Bachelor's, Master's, or Engineering degree from a well-reputed university.
- Strong communication, interpersonal, learning, and organizational skills, matched with the ability to manage stress, time, and people effectively.
- Proven experience coordinating many dependencies and multiple demanding stakeholders in a complex, large-scale deployment environment.
- Ability to manage a diverse and challenging stakeholder community.
- Broad knowledge of, and experience working in, Agile deliveries and Scrum teams.