
Software Engineer II (Big Data)

3 to 5 Years | Hyderabad | 01 Jun, 2019
Job Location: Hyderabad
Education: Not Mentioned
Salary: Not Disclosed
Industry: Telecom / ISP
Functional Area: Embedded / System Software
Employment Type: Full-time

Job Description

The candidate will be responsible for ingesting, storing, validating, and transforming data into a consumable format so that business intelligence teams and data analysts can derive deeper business insight from the data. They will design and implement Big Data analytic solutions on a Hadoop-based platform, create custom analytic and data mining algorithms to help extract knowledge and meaning from vast stores of data, and refine a data processing pipeline focused on unstructured and semi-structured data. In addition, the candidate will routinely work closely with infrastructure, network, database, business intelligence, and application teams to ensure business applications perform within agreed-upon service levels.

  • Design and develop ETL pipelines for collecting, validating, and transforming data according to the specification
  • Gather business and technical requirements in order to create and present a comprehensive solution and implementation plan for client delivery.
  • Design and complete implementation of technical data integrations, identifying cadence, delivery methods, quality control, etc.
  • Build, administer, maintain, and scale a big data platform based on the Hadoop ecosystem, with primary use cases around supporting data science, reporting, and other data-driven functionality across the organization
  • Apply Hadoop ecosystem components such as YARN, MapReduce, HDFS, HBase, Spark, ZooKeeper, Pig, and Hive.
  • Configure and secure Hadoop clusters using best practices.
  • Develop unit tests and performance tests
  • Design ETL jobs for optimal execution in AWS cloud environment
  • Reduce processing time and cost of ETL workloads
  • Participate in peer reviews and design/code review meetings
  • Provide support to the production operations team
  • Implement data quality checks.
  • Identify areas where machine learning can be used to identify data anomalies
  • Monitor Hadoop cluster connectivity and performance.
  • Manage and analyze Hadoop log files.
  • Manage and monitor the file system.
  • Develop and document best practices
  • Support and maintain HDFS.
  • Administer new and existing Hadoop infrastructure.
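The data-quality responsibilities above (validating records before they reach downstream consumers) can be sketched in plain Python. This is only an illustrative sketch: the field names and validation rules here are hypothetical placeholders, since the posting does not specify a schema.

```python
# Minimal data-quality check stage for an ETL pipeline (illustrative).
# REQUIRED_FIELDS is a hypothetical schema, not from the job posting.
REQUIRED_FIELDS = {"user_id", "event_type", "timestamp"}

def validate(record: dict) -> bool:
    """Return True if the record has every required field, non-empty."""
    return all(record.get(f) not in (None, "") for f in REQUIRED_FIELDS)

def run_quality_check(records):
    """Split a batch into clean rows and rejects, as a simple ETL stage."""
    clean, rejects = [], []
    for record in records:
        (clean if validate(record) else rejects).append(record)
    return clean, rejects
```

In a production pipeline a check like this would typically run as a Spark or Hive stage, with rejected rows routed to a quarantine table for review rather than silently dropped.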
Skills and Knowledge:
  • Thorough understanding of Hadoop infrastructure and solid theoretical knowledge of big data; able to see the big picture and to conceptualize and document creative solutions
  • Excellent oral and written communications skills
  • Strong interpersonal skills at all levels of management and ability to motivate employees/teams to apply skills and techniques to solve dynamic problems; excellent teamwork skills
  • Ability to weigh various suggested technical solutions against the original business needs and choose the most cost-effective solution
  • Strong problem-solving and creative-thinking skills
  • Ability to multi-task and manage multiple assignments simultaneously
  • Experience working with geographically distributed teams
Experience:
  • Bachelor's degree or higher in Computer Science, Engineering, or a related discipline
  • 4+ years in-depth experience in administration and support of Enterprise Hadoop (Community, Cloudera or Hortonworks)
  • 4+ years of experience with distributed scalable Big Data store or NoSQL, including Accumulo, Cloudbase, HBase, or Big Table
  • 3+ years extensive experience in at least 3 of the following technologies: MapReduce, Spark, Hive, Sqoop and Pig
  • 5+ years of operating knowledge of various UNIX/Linux environments, specifically Red Hat and CentOS.
  • 5+ years UNIX shell scripting experience.
  • Demonstrated experience working well with customers of varying levels of technical expertise in high-pressure situations and complex environments.
  • Experience in MapReduce programming with Apache Hadoop and Hadoop Distributed File System (HDFS) and with processing large data stores.
  • Experience with the design and development of multiple object-oriented systems
  • Knowledge of scripting languages (Python, Bash, Ruby, Perl) and at least one SQL variant.
  • Experience with using repository management solutions.
  • Experience with deploying applications in a Cloud environment.
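Since the role calls for hands-on MapReduce experience, the map/shuffle/reduce model itself can be illustrated with a minimal word count in plain Python. This is a sketch of the programming model only, not Hadoop API code.

```python
from collections import defaultdict
from itertools import chain

def map_phase(line):
    # Map: emit a (word, 1) pair for every word in one line of input.
    return [(word.lower(), 1) for word in line.split()]

def shuffle(pairs):
    # Shuffle: group values by key, as the framework does between phases.
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    # Reduce: sum the counts emitted for each word.
    return {word: sum(counts) for word, counts in groups.items()}

def word_count(lines):
    pairs = chain.from_iterable(map_phase(line) for line in lines)
    return reduce_phase(shuffle(pairs))
```

In Hadoop, the same three phases run distributed across a cluster, with mappers and reducers scheduled by YARN and intermediate data shuffled over the network.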

Keyskills: sql, state requisition, php, maintenance script, pig, career, reporting


© 2019 Hireejobs All Rights Reserved