Principal SW Engineer (Big Data)

4.00 to 5.00 Years | Hyderabad | 18 Jul, 2019
Job Location: Hyderabad
Education: Not Mentioned
Salary: Not Disclosed
Industry: Telecom / ISP
Functional Area: IT Operations / EDP / MIS, Network / System Administration
Employment Type: Full-time

Job Description

Responsibilities: The candidate will be responsible for design, capacity planning, cluster setup, performance fine-tuning, monitoring, structure planning, scaling, and administration. In addition, the candidate will routinely work closely with infrastructure, network, database, business intelligence, and application teams to ensure business applications perform within agreed-upon service levels.

  • Serve as the SME and technical liaison between clients and Neustar Services, providing support through multi-phase delivery projects.
  • Provide support and knowledge of AWS infrastructure and services.
  • Ensure access security is maintained while supporting appropriate end-user access and functionality.
  • Deliver product training and walkthroughs for clients.
  • Gather business and technical requirements to create and present a comprehensive solution and implementation plan for client delivery.
  • Design and complete implementation of technical data integrations, identifying cadence, delivery methods, quality control, etc.
  • Build, administer, maintain, and scale a big data platform based on the Hadoop ecosystem, with primary use cases around supporting data science, reporting, and other data-driven functionality across the organization.
  • Implement components of the Hadoop ecosystem such as YARN, MapReduce, HDFS, HBase, Spark, ZooKeeper, Pig, and Hive.
  • Take accountability for storage, performance tuning, and volume management of Hadoop clusters and MapReduce routines.
  • Configure and secure Hadoop clusters using best practices.
  • Monitor Hadoop cluster connectivity and performance (see the sketch after this list).
  • Manage and analyze Hadoop log files.
  • Manage and monitor the file system.
  • Develop and document best practices.
  • Support and maintain HDFS.
  • Administer new and existing Hadoop infrastructure.
  • Design and implement solutions that provide disaster recovery with minimal RTO and RPO.
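
For the monitoring and file system duties above, the following is a minimal, hypothetical sketch of how an administrator might poll cluster health from a UNIX host in Python. The CLI calls (hdfs dfsadmin -report, hdfs fsck /, yarn node -list) are standard Hadoop tools; the wrapper script, the alerting rules, and the parsing details are illustrative assumptions, not tooling specified by this role.

    #!/usr/bin/env python3
    """Illustrative Hadoop cluster health check (assumptions noted inline)."""
    import subprocess

    def run(cmd):
        """Run a Hadoop CLI command and return its stdout; raises if it fails."""
        return subprocess.run(cmd, check=True, capture_output=True, text=True).stdout

    def main():
        # Standard Hadoop CLI calls; exact output formats vary by distribution.
        dfs_report = run(["hdfs", "dfsadmin", "-report"])  # capacity, live/dead DataNodes
        fsck = run(["hdfs", "fsck", "/"])                  # block-level file system health
        nodes = run(["yarn", "node", "-list"])             # NodeManager connectivity

        # Hypothetical alerting rule: flag dead DataNodes and corrupt blocks.
        for line in dfs_report.splitlines():
            if line.strip().lower().startswith("dead datanodes"):
                print("ALERT:", line.strip())
        if "CORRUPT" in fsck:
            print("ALERT: fsck reports corrupt blocks")
        print(nodes)

    if __name__ == "__main__":
        main()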
Skills and Knowledge:
  • Thorough understanding of Hadoop infrastructure and solid theoretical knowledge of big data; able to see the big picture and to conceptualize and document creative solutions
  • Excellent oral and written communication skills
  • Strong interpersonal skills at all levels of management, with the ability to motivate employees and teams to apply skills and techniques to solve dynamic problems; excellent teamwork skills
  • Ability to weigh various suggested technical solutions against the original business needs and choose the most cost-effective solution
  • Strong problem-solving and creative-thinking skills
  • Ability to multi-task and manage multiple assignments simultaneously
  • Experience working with geographically distributed teams
Experience:
  • Bachelor's degree or better in Computer Science, Engineering, or a related discipline
  • 6+ years of in-depth experience in administration and support of enterprise Hadoop (community, Cloudera, or Hortonworks)
  • 6+ years of experience with a distributed, scalable big data store or NoSQL database, including Accumulo, Cloudbase, HBase, or Bigtable
  • 4+ years of extensive experience in at least three of the following technologies: MapReduce, Spark, Hive, Sqoop, and Pig
  • 6+ years of operating knowledge of various UNIX environments, specifically Red Hat and CentOS
  • 6+ years of UNIX shell scripting experience
  • Demonstrated experience working well with customers of varying levels of technical expertise in high-pressure situations and complex environments
  • Experience in MapReduce programming with Apache Hadoop and the Hadoop Distributed File System (HDFS) and with processing large data stores (see the sketch after this list)
  • Experience with the design and development of multiple object-oriented systems
  • Knowledge of scripting languages (Python, Bash, Ruby, Perl) and at least one SQL variant
  • Experience with using repository management solutions
  • Experience with deploying applications in a Cloud environment
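
As an illustration of the MapReduce-style programming called for above, here is a minimal PySpark sketch of the classic word count, reading from and writing to HDFS. The application name and the HDFS paths are hypothetical placeholders; real cluster settings would normally come from spark-submit.

    #!/usr/bin/env python3
    """Classic word count in PySpark: the map and reduce phases of the MapReduce model."""
    from operator import add
    from pyspark.sql import SparkSession

    # Hypothetical application name; cluster configuration comes from spark-submit.
    spark = SparkSession.builder.appName("WordCountSketch").getOrCreate()

    # Hypothetical HDFS input path.
    lines = spark.sparkContext.textFile("hdfs:///data/input/sample.txt")

    counts = (
        lines.flatMap(lambda line: line.split())  # map: emit one record per word
             .map(lambda word: (word, 1))         # map: pair each word with a count of 1
             .reduceByKey(add)                    # reduce: sum the counts per word
    )

    # Hypothetical HDFS output directory (must not already exist).
    counts.saveAsTextFile("hdfs:///data/output/word_counts")
    spark.stop()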

Keyskills: reporting, SQL, communication, administration
