Hadoop Ecosystem Support and Big Data Engineer_Perm

3 to 7 Years | Pune | 15 Oct, 2020
Job Location: Pune
Education: Not Mentioned
Salary: Rs 1.5 - 4.5 Lakh/Yr
Industry: IT - Software
Functional Area: General / Other Software
Employment Type: Full-time

Job Description

Primary Skills and Experience:
- Extensive experience with Hadoop ecosystem concepts, tools and modules, including Apache ZooKeeper, Apache Solr, Apache Ambari, Spark, Hadoop application clusters, YARN and Kafka
- Extensive experience with Hadoop ecosystem architecture and HDFS
- Manage and maintain the Hadoop ecosystem for uninterrupted jobs; routinely check, back up and monitor the entire system
- Take end-to-end responsibility for the Hadoop lifecycle
- Handle installation, configuration and support of the Hadoop ecosystem
- Write MapReduce code for Hadoop clusters and help build new Hadoop clusters (see the sketch after this list)
- Manage and monitor Hadoop log files
- Ensure that connectivity and the network are always up and running
- Plan capacity upgrades or downsizing as the need arises
- Manage HDFS and ensure that it is working optimally at all times
- Secure the Hadoop ecosystem thoroughly
- Responsible for the documentation and architecture of Hadoop applications
- Maintain data security and privacy
- Regulate administration rights according to users' job profiles; add new users over time and remove redundant users smoothly
- Full knowledge of HBase for efficient Hadoop administration
- Proficiency in Linux scripting
- Advanced working knowledge of relational databases and query authoring, working familiarity with a variety of databases, and prior experience with PostgreSQL and MongoDB
- Experience building and optimizing big data pipelines, architectures and data sets
- Create and maintain optimal data pipeline architecture
- Assemble large, complex data sets that meet functional and non-functional business requirements
- Build analytics tools that use the data pipeline to provide actionable insights into customer acquisition, operational efficiency and other key business performance metrics
- Experience performing root cause analysis on internal and external data and processes to answer specific business questions and identify opportunities for improvement
- Strong analytical skills for working with unstructured datasets
- Build processes supporting data transformation, data structures, metadata, dependency and workload management
- Develop data mining architecture, data modeling standards, and more
- A successful history of manipulating, processing and extracting value from large, disconnected datasets
- Working knowledge of message queuing, stream processing and highly scalable big data stores
- Design web applications for querying data and swift data tracking at higher speeds
- Experience supporting and working with cross-functional teams in a dynamic environment
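As an illustrative sketch of the MapReduce responsibility above: a minimal word-count job written against the standard org.apache.hadoop.mapreduce API. The class names and the HDFS input/output paths taken from the command line are assumptions for illustration, not something specified by this role.

// Minimal Hadoop MapReduce sketch (word count). Illustrative only; names and paths are assumed.
import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class WordCount {

  // Mapper: emits (word, 1) for every token in the input split.
  public static class TokenizerMapper extends Mapper<Object, Text, Text, IntWritable> {
    private final static IntWritable one = new IntWritable(1);
    private final Text word = new Text();

    @Override
    public void map(Object key, Text value, Context context) throws IOException, InterruptedException {
      StringTokenizer itr = new StringTokenizer(value.toString());
      while (itr.hasMoreTokens()) {
        word.set(itr.nextToken());
        context.write(word, one);
      }
    }
  }

  // Reducer: sums the counts emitted for each word.
  public static class IntSumReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
    private final IntWritable result = new IntWritable();

    @Override
    public void reduce(Text key, Iterable<IntWritable> values, Context context)
        throws IOException, InterruptedException {
      int sum = 0;
      for (IntWritable val : values) {
        sum += val.get();
      }
      result.set(sum);
      context.write(key, result);
    }
  }

  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    Job job = Job.getInstance(conf, "word count");
    job.setJarByClass(WordCount.class);
    job.setMapperClass(TokenizerMapper.class);
    job.setCombinerClass(IntSumReducer.class);   // combiner cuts shuffle volume on the cluster
    job.setReducerClass(IntSumReducer.class);
    job.setOutputKeyClass(Text.class);
    job.setOutputValueClass(IntWritable.class);
    FileInputFormat.addInputPath(job, new Path(args[0]));    // HDFS input directory
    FileOutputFormat.setOutputPath(job, new Path(args[1]));  // HDFS output directory (must not already exist)
    System.exit(job.waitForCompletion(true) ? 0 : 1);
  }
}

A job like this would typically be packaged as a jar and submitted to the cluster (for example with hadoop jar wordcount.jar WordCount <input> <output>), with YARN scheduling the map and reduce tasks across the nodes.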

Key Skills: java, hive, big data, hadoop, apache webserver, root cause analysis, data mining, data modeling, data security, data tracking, data structures, apache zookeeper, web applications
