Job Location | Pune |
Education | Not Mentioned |
Salary | Not Disclosed |
Industry | Recruitment Services |
Functional Area | DBA / Data Warehousing, IT Operations / EDP / MIS |
Employment Type | Full-time |
Job Details

Location: Pune
State: Pune
Postal Code: 411038

Desired Skills:
- Proficient understanding of distributed computing principles
- Management of a Hadoop cluster, with all included services
- Ability to resolve ongoing issues with operating the cluster
- Proficiency with Hadoop v2, MapReduce, and HDFS
- Experience building stream-processing systems using solutions such as Storm or Spark Streaming
- Good knowledge of Big Data querying tools such as Pig, Hive, and Impala
- Experience with Spark
- Experience integrating data from multiple data sources
- Experience with NoSQL databases such as HBase, Cassandra, and MongoDB
- Knowledge of various ETL techniques and frameworks, such as Flume
- Experience with messaging systems such as Kafka or RabbitMQ
- Experience with Big Data ML toolkits such as Mahout, Spark ML, or H2O
- Good understanding of the Lambda Architecture, along with its advantages and drawbacks
- Experience with Cloudera, MapR, or Hortonworks

Responsibilities: We are looking for a Data Engineer who will work on collecting, storing, processing, and analyzing huge sets of data. The primary focus will be on choosing optimal solutions for these purposes, then implementing, maintaining, and monitoring them. You will also be responsible for integrating them with the architecture used across the company.
Experience Requirements:
- Selecting and integrating any Data tools and frameworks required to provide requested capabilities
- Implementing the ETL process
- Monitoring performance and advising on any necessary infrastructure changes
- Defining data retention policies

Industry: IT
Salary Range: As per industry standards.
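The role assumes working familiarity with the MapReduce model named above (Hadoop v2, MapReduce, HDFS). As a rough illustration only, here is the map / shuffle / reduce flow sketched in plain Python on an in-memory word-count example; a real Hadoop job would instead implement Mapper and Reducer classes and let the framework handle the shuffle and distribution:

```python
from collections import defaultdict

def map_phase(records):
    # Map: emit a (word, 1) pair for every word in every input line
    for line in records:
        for word in line.split():
            yield (word.lower(), 1)

def shuffle(pairs):
    # Shuffle: group emitted values by key, as the framework
    # does between the map and reduce phases
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    # Reduce: aggregate the grouped values (here, sum the counts)
    return {word: sum(counts) for word, counts in groups.items()}

# Hypothetical sample input, standing in for lines read from HDFS
lines = ["big data big cluster", "data pipeline"]
counts = reduce_phase(shuffle(map_phase(lines)))
print(counts["big"])   # 2
print(counts["data"])  # 2
```

The same three-stage shape underlies the stream-processing and ETL tools listed above (Spark, Storm, Flume); only the execution engine and data sources differ.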
Keyskills: SQL, ML, Hive, NoSQL, Pig, Java, Spark, Informatica, ETL, Python, Big Data, Machine Learning, Process Monitoring, Monitoring Performance, Marketing Support, Data Warehousing, Data Retention, Building Envelope