Job Location | Hyderabad
Education | Not Mentioned
Salary | Not Disclosed
Industry | IT - Software
Functional Area | General / Other Software, DBA / Data Warehousing
Employment Type | Full-time
Responsibilities:
- Partner with data analysts, product owners, and data scientists to better understand requirements, solution designs, bottlenecks, resolutions, etc.
- Support and enhance data pipelines and ETL across heterogeneous sources
- Transform data using data mapping and data processing capabilities such as Kafka, Spark, Spark SQL, HiveQL, etc. (see the illustrative sketch after this section)
- Expand and grow data platform capabilities to solve new data problems and challenges
- Adapt dynamically to conventional big-data frameworks and tools as required by the project's use cases
- Work with Hadoop clusters on the major clouds (AWS/Azure/GCP)

Requirements:
- 3 to 5 years of experience in analytical projects involving data lake, data warehouse, big data, and cloud BI solutions at a major systems integrator
- 3+ years of experience with the Hadoop ecosystem and big data technologies
- Knowledge of design strategies for developing a scalable, resilient, always-on data lake
- Hands-on experience with the Hadoop ecosystem: HDFS, MapReduce, HBase, Hive, Impala, Spark, Kafka
- Experience implementing Hadoop data lakes: data storage, partitioning, splitting, and file types (Parquet, Avro, ORC) for specific use cases
- Experience with one of the query languages: SQL, Hive, Impala, Drill, etc.
- Exposure to one of the NoSQL databases: HBase, MongoDB, Cassandra, etc.
- Experience with Agile (Scrum) development methodology
- Exposure to data ingestion frameworks such as Kafka, Sqoop, Storm, NiFi, Spring Cloud, etc.
- Development and automation skills; must be very comfortable reading and writing Scala, Python, or Java code

Desired:
- Experience with one of the Hadoop open-source distributions: Apache, MapR, or Cloudera
- Hadoop cluster experience on a major cloud (AWS/Azure/GCP)

Job Segment: Database, SQL, Developer, Java, Data Warehouse, Technology
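As a rough illustration of the pipeline work described above, here is a minimal PySpark sketch that batch-reads a Kafka topic, applies a simple Spark SQL transformation, and writes partitioned Parquet files to a data lake path. The topic name, broker address, field layout, and output path are placeholders, not part of the posting, and the job assumes the Spark Kafka connector is available on the cluster.

# Minimal ETL sketch: Kafka -> Spark transformation -> partitioned Parquet in a data lake.
# All names and paths below are hypothetical placeholders for illustration only.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("etl-sketch").getOrCreate()

# Batch-read a Kafka topic (broker address and topic name are hypothetical).
raw = (spark.read.format("kafka")
       .option("kafka.bootstrap.servers", "broker:9092")
       .option("subscribe", "events")
       .load())

# Kafka values arrive as bytes; cast to string and split simple CSV-style fields.
events = (raw.selectExpr("CAST(value AS STRING) AS line")
          .select(F.split("line", ",").alias("cols"))
          .select(F.col("cols")[0].alias("user_id"),
                  F.col("cols")[1].alias("event_type"),
                  F.to_date(F.col("cols")[2]).alias("event_date")))

# Write to the lake as Parquet, partitioned by date so downstream queries can prune partitions.
(events.write.mode("append")
 .partitionBy("event_date")
 .parquet("/data/lake/events"))

Partitioning by a date column and choosing a columnar format such as Parquet or ORC is the kind of storage/partitioning decision the requirements above refer to; the right split depends on query patterns and data volume.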
Keyskills:
Sqoop, Java, HDFS, research, reporting, Avro, customer relations, use cases, lake, open source, big data, SQL, basis, accounts, Hive, data processing, data analysis, data mapping, data solutions, ETL