Job Location | Hyderabad
Education | Not Mentioned
Salary | Not Disclosed
Industry | IT - Software
Functional Area | General / Other Software
Employment Type | Full-time
Works independently under limited supervision and applies knowledge of the subject matter in Applications Development. Possesses sufficient knowledge and skills to effectively deal with issues and challenges within the field of specialization and to develop simple application solutions. Second-level professional with direct impact on results and outcomes.

Your future duties and responsibilities:
- 4+ years of overall experience in data analysis, data modeling and implementation of enterprise-class systems spanning Big Data, data integration, object-oriented programming and advanced analytics.
- Excellent understanding of Hadoop architecture and daemons such as HDFS, NameNode, DataNode, JobTracker and TaskTracker.
- Extracted files from Cassandra through Sqoop, placed them in HDFS and processed them.
- Expertise in Big Data technologies and Hadoop ecosystem tools such as Flume, Sqoop, HBase, ZooKeeper, Oozie, MapReduce, Hive, Pig and YARN.
- Extracted and updated data in MongoDB using the mongoimport and mongoexport command-line utilities.
- In-depth understanding of Spark architecture, including Spark Core, Spark SQL, DataFrames, Spark Streaming and Spark MLlib.
- Expertise in developing real-time streaming solutions using Spark Streaming; will be responsible for the development of technical solutions utilizing the big data platform.
- Importing and exporting data between RDBMS and HDFS, Hive tables and HBase using Sqoop (see the sketch after this section).
- Knowledge of handling a Kafka cluster; created several topologies to support real-time processing requirements.

Required qualifications to be successful in this role (Primary Skill):
Work experience | 4-8 years
Technology Stack | Hadoop (Hortonworks), Hive, Spark, Informatica, Python, Bash (shell scripting), R (nice to have), Kafka (nice to have)
Experience | 4-8 years
Skills | Hadoop, Hive, Java, Unix, Perl, SQL / PL SQL
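The duties above reference several command-line workflows: Sqoop imports from an RDBMS into HDFS/Hive, MongoDB round trips with mongoimport/mongoexport, and Kafka feeding a Spark Streaming job. Below is a minimal Bash sketch of those steps. The connection strings, database/table names, topic names and the streaming_job.py script are hypothetical placeholders for illustration, not details taken from the posting; kafka-topics.sh with --bootstrap-server assumes a reasonably recent Kafka distribution.

```bash
#!/usr/bin/env bash
# Hedged sketch: all hosts, credentials, tables, paths and topics below are
# placeholders and would need to be replaced with real cluster values.
set -euo pipefail

# 1) Import a relational table into HDFS and a Hive table with Sqoop.
sqoop import \
  --connect jdbc:mysql://rdbms-host:3306/sales \
  --username etl_user -P \
  --table orders \
  --target-dir /data/raw/orders \
  --hive-import \
  --hive-table staging.orders \
  --num-mappers 4

# 2) Round-trip a MongoDB collection with the mongoexport / mongoimport CLIs.
mongoexport --host mongo-host --db analytics --collection events \
  --out /tmp/events.json
mongoimport --host mongo-host --db analytics --collection events_backfill \
  --file /tmp/events.json

# 3) Create a Kafka topic and submit a Spark Streaming job that consumes it.
kafka-topics.sh --create --topic clickstream \
  --partitions 3 --replication-factor 2 \
  --bootstrap-server kafka-broker:9092
spark-submit --master yarn --deploy-mode cluster \
  --packages org.apache.spark:spark-sql-kafka-0-10_2.12:3.3.0 \
  streaming_job.py
```

The -P flag makes Sqoop prompt for the database password instead of embedding it in the script; in a scheduled job a password file or credential provider would normally be used instead.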
Keyskills:
SQL, CGI, Pig, Java, Hive, Perl, Unix, YARN, Bash, Sqoop, Flume, HDFS, PL/SQL