Big Data Lead

Responsibilities

• Lead a development team of data engineers
• Implement a big data enterprise data lake, BI, and analytics system using Hive LLAP, Spark, Kafka, and HBase
• Responsible for design, development, testing oversight and implementation
• Work closely with the program manager, scrum master, and architects to convey technical impacts on the development timeline and risks
• Coordinate with data engineers and API developers to drive program delivery.
• Drive technical development and application standards across the enterprise data lake and Retail Data domain
• Benchmark and debug critical issues with algorithms and software as they arise.
• Lead and assist with the technical design and implementation of the Big Data cluster in various
environments.
• Guide and mentor the development team, for example in creating custom common utilities/libraries that can be reused across multiple big data development efforts.
• Accomplish results by communicating expectations; planning, monitoring, and appraising results;
coaching, counselling, and disciplining employees; developing, coordinating, and enforcing
systems, policies, procedures, and productivity standards.

Skills & Qualification:

• Must have a bachelor’s degree in Computer Science, Engineering, or a related field
• 10+ years in the software engineering profession
• 4+ years of hands-on experience in the design and development of large-scale Big Data applications on platforms such as Hortonworks and Cloudera
• Extensive experience with Java, Spark, Hive, Kafka, Scala, and Python
• Must have experience with design, development, and deployment in a DevOps environment.
• Should have led the design and implementation of an end-to-end (E2E) data warehousing solution, including source-system integration and data lake implementation, in a Big Data environment
• Must be willing to work in a fast-paced environment with onshore-offshore distributed Agile teams.
• Must have the ability to develop and maintain strong collaborative relationships at all levels
across IT and Business Stakeholders.

• Must have the ability to prioritize multiple tasks and deal with urgent requests in a fast-paced and
demanding environment.
• Excellent written and oral communication skills. Adept at presenting complex topics, influencing, and executing with timely, actionable follow-through
• Good understanding of FSLDM or equivalent financial/banking data warehouse models

Desired Characteristics:

• Ability to provide innovative ideas and see them through to implementation
• Extensive experience working with data warehouses and big data platforms
• Demonstrated experience building strong relationships with senior leaders
• Outstanding written and verbal skills and the ability to influence and motivate teams
• Banking domain knowledge is preferred.
• Experience in customer-facing roles in earlier engagements


Big Data Engineer

Requirements:

  • The candidate must be a self-starter who can work under general guidelines in a fast-paced environment.
  • 3 to 9 years of overall software development experience, with at least 2 years of Data Warehousing domain knowledge
  • Must have at least 3 years of hands-on working knowledge of Big Data technologies such as Hive, Hadoop, HBase, Spark, NiFi, Scala, and Kafka
  • Excellent knowledge of SQL and Linux shell scripting
  • Bachelor's/Master's/Engineering degree from a reputable university.
  • Strong communication, interpersonal, learning, and organizing skills, matched with the ability to manage stress, time, and people effectively
  • Proven experience coordinating many dependencies and multiple demanding stakeholders in a complex, large-scale deployment environment
  • Ability to manage a diverse and challenging stakeholder community
  • Diverse knowledge and experience of working on Agile deliveries and Scrum teams.

Responsibilities:

  • Responsible for the documentation, design, development, and architecture of Hadoop
    applications
  • Convert complex technical and functional requirements into detailed designs
  • Work as a senior developer or individual contributor as the situation requires
  • Adhere to the Scrum timeline and deliver accordingly
  • Prepare Unit/SIT/UAT test cases and log the results
  • Coordinate SIT and UAT testing; collect feedback and provide necessary remediation/recommendations on time.
  • Drive small projects individually.
  • Coordinate changes and deployments on time

Big Data Tester

Job Description:

  • Big Data testing on Hadoop, Hive, etc., using HiveQL, MapReduce, or other big data technologies for automated validation
  • Implementing DevOps/CI-CD using Bitbucket, Jenkins, etc.
  • Implementing test automation frameworks for back-end validation, covering RDBMS, file, and Kafka sources and Big Data targets
  • Working in an Agile environment with Jira, ALM, Confluence, etc.
  • Big Data technologies (Hadoop, Hive, etc.) using HiveQL, Java MapReduce, Pig Latin, Spark/Python, or other big data technologies on Linux
  • Experience with SQL and shell scripting
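
As a rough sketch of the kind of automated back-end validation this role describes, the following compares a source extract against a Big Data target by row count and order-independent per-row checksum. The table rows and function names here are hypothetical illustrations, not part of any specific framework named above.

```python
# Minimal sketch of automated source-to-target validation:
# compare row counts and per-row checksums, ignoring row order.
import hashlib

def row_checksum(row):
    """Stable checksum over a row's fields, joined with a delimiter."""
    joined = "|".join(str(v) for v in row)
    return hashlib.md5(joined.encode("utf-8")).hexdigest()

def validate(source_rows, target_rows):
    """Return count-match and content-match results for two row sets."""
    results = {"count_match": len(source_rows) == len(target_rows)}
    src = sorted(row_checksum(r) for r in source_rows)
    tgt = sorted(row_checksum(r) for r in target_rows)
    results["content_match"] = src == tgt
    return results

# Hypothetical sample data: same rows, different order, should match.
source = [(1, "alice", 100.0), (2, "bob", 250.5)]
target = [(2, "bob", 250.5), (1, "alice", 100.0)]
print(validate(source, target))
```

In practice the two row sets would be fetched from the RDBMS source and the Hive target (e.g. via JDBC and HiveQL queries) rather than hard-coded, and the checksum comparison would typically be pushed down into the engines for large volumes.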

Desired Candidate Profile:

  • Ability to solve problems and identify opportunities through analytical thinking.
  • Good knowledge of Advanced MS Excel tools is a must for reporting and analysis.
  • Strong attention to detail and has ability to learn quickly.
  • Excellent communication skills with good command over English language (verbal & written).
  • Ability to work independently and in a team environment.

Test Delivery Manager

Responsibilities:

  • Providing strong coverage of data validation projects (ETL, Big Data, Analytics, AI, Reporting) for global customers.
  • Managing a team of 30+ in data validation projects
  • Demonstrating improvements in CSAT, Delivery Excellence, and Operations KPIs
  • Providing automation expertise.
  • Demonstrating high energy, integrity, and a track record of exceeding performance expectations in revenue generation and client relationship management.

ETL Developer

Job Description:

  • ETL/Ab Initio skills: development experience in GDE, Express-It, or any other ETL tool
  • Exposure to Data Analysis
  • Experience in requirements analysis, data analysis, application design, application development, implementation, and testing of data warehousing business systems.
  • Solid experience in developing Ab Initio applications for extraction, transformation, cleansing, and loading into a Data Warehouse/Data Mart.
  • Strong experience in developing reusable, generic applications for high-volume DWH environments.
  • Experience in developing and maintaining schemas, fact tables, dimensional tables (Type 1, Type 2, Type 3), and surrogate keys.
  • Extensive use of the Ab Initio EME data store/sandbox (or the equivalent in another ETL tool) for version control, code promotion, and impact analysis
  • Experience in providing Business Intelligence solutions in Data Warehousing on Windows and UNIX platforms.
  • Experience in writing UNIX Shell Scripts.
  • Experience integrating various data sources with multiple relational databases such as Oracle and SQL Server, as well as data from flat files and spreadsheets.
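
The Type 2 dimensional tables and surrogate keys mentioned above follow a standard pattern: changed records are closed out and re-inserted as new versions rather than overwritten. A minimal in-memory sketch of that logic, with hypothetical field names and data (real implementations live in the ETL tool or SQL):

```python
# Illustrative Type 2 slowly changing dimension (SCD2) update:
# expire the current version of a changed record and insert a new
# version with a fresh surrogate key. Field names are hypothetical.
from datetime import date

def apply_scd2(dimension, incoming, today, next_key):
    """Apply SCD2 logic in place; return the next free surrogate key."""
    current = {r["natural_key"]: r for r in dimension if r["is_current"]}
    for rec in incoming:
        nk, attrs = rec["natural_key"], rec["attrs"]
        row = current.get(nk)
        if row is not None and row["attrs"] == attrs:
            continue  # unchanged record: nothing to do
        if row is not None:
            row["is_current"] = False  # expire the old version
            row["end_date"] = today
        dimension.append({
            "surrogate_key": next_key, "natural_key": nk, "attrs": attrs,
            "start_date": today, "end_date": None, "is_current": True,
        })
        next_key += 1
    return next_key
```

Type 1 would instead overwrite `attrs` in place, and Type 3 would keep the prior value in a dedicated "previous" column; the surrogate key lets fact tables join to the version that was current when each fact occurred.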

Desired Candidate Profile:

  • Ability to solve problems and identify opportunities through analytical thinking.
  • Good knowledge of Advanced MS Excel tools is a must for reporting and analysis.
  • Strong attention to detail and has ability to learn quickly.
  • Excellent communication skills with good command over English language (verbal & written).
  • Ability to work independently and in a team environment.