Hadoop (Spark) Development

Experience: 12 to 16 Years
Job Location: Bangalore
Posted: 12 Jun, 2019
Education: Not Mentioned
Salary: Not Disclosed
Industry: IT - Software
Functional Area: General / Other Software
Employment Type: Full-time

Job Description

Job Summary: We are looking for bright, driven, and talented individuals to join our team of passionate and innovative software engineers. In this role, you'll use your experience with Java/Scala, Spark, Big Data, and streaming technologies to build a lending platform based on a data lake.

Job Duties:

  • Developing and deploying distributed-computing Big Data applications using Apache Spark on MapR Hadoop (other distributions such as Hortonworks or Cloudera will work as well); a minimal Spark sketch follows this list
  • Leveraging DevOps techniques and practices such as Continuous Integration, Continuous Deployment, Test Automation, Build Automation and Test-Driven Development to enable rapid delivery of working code, using tools such as Jenkins, Maven, Nexus, Ansible, Terraform, Git and Docker
  • Help drive cross-team design and development through technical leadership and mentoring
  • Work with business partners to develop business rules and business rule execution
  • Perform process improvement and re-engineering with an understanding of technical problems and solutions as they relate to the current and future business environment.
  • Design and develop innovative solutions for demanding business situations
  • Analyze complex distributed production deployments, and make recommendations to optimize performance
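
As an illustration of the Spark development duty above, here is a minimal batch-job sketch in Scala; the data-lake paths, column names, and aggregation logic are hypothetical placeholders for illustration, not details from the posting.

import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions._

// Minimal sketch of a distributed Spark batch job that reads raw loan
// events from a data-lake path and writes an aggregate back as Parquet.
// All paths and column names below are illustrative assumptions.
object LoanEventAggregator {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("loan-event-aggregator")
      .getOrCreate()

    // Read raw events from the data lake (e.g. an HDFS / MapR-FS path).
    val events = spark.read.parquet("/datalake/raw/loan_events")

    // Aggregate outstanding amounts per borrower.
    val totals = events
      .groupBy(col("borrower_id"))
      .agg(sum(col("amount")).as("total_amount"))

    // Write the result to a curated zone of the lake.
    totals.write.mode("overwrite").parquet("/datalake/curated/loan_totals")

    spark.stop()
  }
}

A job like this would typically be packaged with Maven and submitted to the cluster via spark-submit as part of the CI/CD pipeline described above.
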
Essential skills:
  • At least 9 years of professional work experience programming in Java or Scala (3+)
  • 3 or more years of experience with the Hadoop Stack
  • 2+ years of experience with distributed computing frameworks such as Apache Spark and Hadoop (a streaming sketch follows this list)
  • Experience with Elasticsearch and Spark (a plus)
  • Experience with database and ETL development
  • Strong knowledge of Object Oriented Analysis and Design, Software Design Patterns and Java coding principles
  • Experience with Core Java development preferred
  • Familiarity with Agile engineering practices
  • Proficiency with MapR Hadoop distribution components and custom packages is a huge plus
  • Proven understanding of and related experience with Hadoop, HBase, Hive, Pig, Sqoop, Flume and/or MapReduce
  • Excellent RDBMS (Oracle, SQL Server) knowledge for development using SQL and PL/SQL
  • Solid UNIX OS and Shell Scripting skills
  • Strong initiative with the ability to identify areas of improvement with little direction
  • Team-player excited to work in a fast-paced environment
  • Agile experience preferred
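
To give a flavour of the streaming technologies mentioned in the job summary, the sketch below shows a Spark Structured Streaming job in Scala; the Kafka broker, topic name, event schema, and sink paths are assumptions for illustration, not requirements from the posting.

import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions._
import org.apache.spark.sql.types._

// Sketch of a Structured Streaming job that consumes loan-application
// events from Kafka and appends them to the raw zone of the data lake.
// Broker address, topic, schema, and paths are illustrative assumptions.
object LoanApplicationStream {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("loan-application-stream")
      .getOrCreate()

    // Expected shape of each JSON event on the topic (assumed).
    val schema = new StructType()
      .add("application_id", StringType)
      .add("borrower_id", StringType)
      .add("amount", DoubleType)

    // Read the Kafka topic and parse the JSON payload.
    val applications = spark.readStream
      .format("kafka")
      .option("kafka.bootstrap.servers", "broker:9092")
      .option("subscribe", "loan-applications")
      .load()
      .select(from_json(col("value").cast("string"), schema).as("event"))
      .select("event.*")

    // Append parsed events to the data lake with checkpointing.
    val query = applications.writeStream
      .format("parquet")
      .option("path", "/datalake/raw/loan_applications")
      .option("checkpointLocation", "/datalake/checkpoints/loan_applications")
      .outputMode("append")
      .start()

    query.awaitTermination()
  }
}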

Keyskills:
software, flume, core, lending, scripting, java, mentoring, tools, agile, sql, programming, driven, test, shell, development
