Job Location | Bangalore |
Education | Not Mentioned |
Salary | Not Disclosed |
Industry | IT - Software |
Functional Area | General / Other Software |
Employment Type | Full-time |
*Intermediate consulting position, operating independently with some assistance and guidance, to provide quality work products to a project team or customer that comply with Oracle methodologies and practices. Performs standard duties and tasks, with some variation, to implement Oracle products and technology to meet customer specifications. Standard assignments are accomplished without assistance by exercising independent judgment, within defined policies and processes, to deliver functional and technical solutions on moderately complex customer engagements. 2-5 years of overall experience in relevant functional or technical roles. Undergraduate degree or equivalent experience. Product or technical expertise relevant to practice focus. Ability to communicate effectively and build rapport with team members and clients. Ability to travel as needed.

Requirements:
1. 4-6 years of experience in the Hadoop ecosystem, Spark and Data Lake - IC2
2. Candidate must be from a development background
3. Proficient in writing SQL and HiveQL code and in data modelling. Hands-on experience with UNIX and basic shell scripting. Knowledge of code migration and version-control tools such as Jenkins, Bitbucket, GitHub, etc.
4. Ownership of the code, including moving/migrating it into test, UAT and production
5. 3+ years of implementation experience with Hadoop in a Data Lake environment. Experience developing data ingestion and integration flows using Big Data ecosystem tools: Hadoop/Hive, Sqoop, Spark, Presto and Hadoop RESTful APIs. Must have 1-2 years of experience in core Java or Scala, with hands-on experience building Spark data flows in Java/Scala. Proficient in writing SQL and HiveQL code and in data modelling. Hands-on experience with UNIX and basic shell scripting. Knowledge of code migration and version-control tools such as Jenkins, Bitbucket, GitHub, etc.
6. Familiarity with the Agile methodology of project execution is preferred
7. Customer-facing experience with respect to technical discussions

Team handling / generic skills:
1. Self-driven; ready to learn and adapt depending on customer/organization needs
2. Excellent communication skills
3. A good working understanding of investment/custodian banking for financial institutions is preferable
4. Should be able to work in an Agile model
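The requirements above repeatedly ask for hands-on UNIX experience and basic shell scripting in a data-ingestion context. As a hedged illustration of that skill (the function name, delimiter, and file contents are hypothetical, not taken from the posting), a small pre-ingestion validation helper might look like:

```shell
#!/bin/sh
# Illustrative sketch only: validate a pipe-delimited feed file before
# handing it to an ingestion flow. Paths and formats are assumptions.

validate_feed() {
    file="$1"
    # Reject a missing or empty file.
    [ -s "$file" ] || { echo "ERROR: $file missing or empty"; return 1; }
    # Count fields in the header row, then flag any row with a
    # different field count.
    fields=$(head -n 1 "$file" | awk -F'|' '{print NF}')
    bad=$(awk -F'|' -v n="$fields" 'NF != n' "$file" | wc -l)
    [ "$bad" -eq 0 ] || { echo "ERROR: $bad malformed rows"; return 1; }
    echo "OK: $file has $fields fields per row"
}

# Example usage with a throwaway file:
tmp=$(mktemp)
printf 'id|name|city\n1|asha|bangalore\n' > "$tmp"
validate_feed "$tmp"
rm -f "$tmp"
```

In practice a script like this would sit in front of a Sqoop or Spark load so that malformed feeds fail fast instead of polluting the data lake.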
Keyskills :
big data, core java, project execution, agile methodology, financial institutions, implementation experience, sql, uat, java, unix, lake, sqoop, spark, agile, basic, hadoop, oracle, github, presto, banking