Job Location | Bangalore
Education | Not Mentioned
Salary | Not Disclosed
Industry | Management Consulting / Strategy
Functional Area | General / Other Software
Employment Type | Full-time
Job Description:
Develop analytic tools, working on Big Data in a distributed environment. Scalability will be the key.
- Understand the product and provide end-to-end design and implementation.
- Completely or partly own a module and take it from design, development, deployment, and QA through to production release and support.
- Liaise with other team members to conduct load and performance testing on your modules.
- Perform product and technology assessments whenever needed.
- Visualize and evangelize next-generation infrastructure in the Big Data space (batch, near-real-time, and real-time technologies).
- Be passionate about continuous learning, experimenting with, applying, and contributing to cutting-edge open-source technologies and software paradigms.
- Provide strong technical expertise (performance, application design, stack upgrades) to lead Platform Engineering.
- Provide technical leadership and be a role model to data engineers pursuing a technical career path in engineering.
- Provide and inspire innovations that fuel the growth of the company.

Experience:
- Minimum 4 years of strong experience with Core Java, the Hadoop ecosystem, and any NoSQL database.
- Minimum 2.5 to 3 years of strong experience with Spark / Storm / Cassandra / Kafka / Scala.

Technical / Functional Skills:
- Core Java, multi-threading, OOP concepts, and writing parsers in Core Java
- Strong knowledge of the Hadoop ecosystem, including Hive / Pig / MapReduce
- Spark / Storm / Kafka / Scala / Cassandra
- Cloud computing (AWS / Azure, etc.)
- Strong in SQL, NoSQL, RDBMS, and data warehousing concepts
- Writing complex MapReduce programs
- Strong experience in pipeline building with Spark, Storm, Cassandra, or Scala
- Designing efficient and robust ETL workflows
- Gathering and processing raw data at scale (including writing scripts, web scraping, calling APIs, writing SQL queries, etc.)
- Tuning Hadoop solutions to improve performance and the end-user experience
- Processing unstructured data into a form suitable for analysis, then performing the analysis
- Creating Big Data reference architecture deliverables
- Performance optimization in a Big Data environment

Generic Leadership Skills:
- Prior customer-facing experience
- Ability to lead all requirement-gathering sessions with the customer
- Strong coordination and interpersonal skills to handle complex projects

Education: BE / B.Tech, ME / M.Tech, MS, MCA (with an aggregate of 75% and above)
Experience: 4 - 8 yrs
Salary: As per industry standard
Location: Bangalore
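To give a flavor of the "Core Java, multi-threading, writing parsers" skills the role asks for, here is a minimal, hypothetical sketch (not part of the employer's codebase): a small parser that counts event types from raw pipe-delimited log lines across a thread pool, skipping malformed input.

```java
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

// Hypothetical illustration only: parse "timestamp|eventType|payload" lines
// in parallel and tally how often each event type occurs.
public class LogParser {

    static Map<String, Integer> countEvents(List<String> lines, int threads) {
        ConcurrentHashMap<String, Integer> counts = new ConcurrentHashMap<>();
        ExecutorService pool = Executors.newFixedThreadPool(threads);
        for (String line : lines) {
            pool.submit(() -> {
                String[] parts = line.split("\\|");
                if (parts.length >= 2) {
                    // merge() is atomic on ConcurrentHashMap, so no extra locking is needed.
                    counts.merge(parts[1].trim(), 1, Integer::sum);
                }
            });
        }
        pool.shutdown();
        try {
            pool.awaitTermination(5, TimeUnit.SECONDS);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return counts;
    }

    public static void main(String[] args) {
        List<String> logs = List.of(
            "2024-01-01T00:00:00|login|user=a",
            "2024-01-01T00:00:01|click|user=a",
            "2024-01-01T00:00:02|login|user=b",
            "garbage line"  // malformed: silently skipped by the parser
        );
        Map<String, Integer> counts = countEvents(logs, 4);
        System.out.println(counts.get("login")); // prints 2
    }
}
```

In a real pipeline of the kind described above, the same per-line parsing logic would typically live inside a Spark or MapReduce task rather than a hand-rolled thread pool; the sketch only illustrates the Core Java fundamentals.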
Key Skills:
big data, role model, open source, web scraping, user experience, data warehousing, unstructured data, application design, interpersonal skills, technical leadership, requirement gathering, Java, performance testing