Job Location | Pune |
Education | Not Mentioned |
Salary | Not Disclosed |
Industry | IT - Software |
Functional Area | General / Other Software |
Employment Type | Full-time |
Total Experience: 10+ years
Location: Pune

Key Responsibilities:
- Must have implemented on-prem/cloud-native Data Lake solutions; Azure preferred
- Must have knowledge of building data models, ETL pipelines, and data governance methods/models
- Must have worked on Azure-based Data Lake implementations using services such as ADLS, Blob Storage, ADF, and Azure Databricks
- Must be able to convert business requirements into technical requirements and determine the best-suited technologies to implement them
- Must be able to create clean and precise design documents, architecture diagrams, and ETL workflow designs
- Must have experience in client communication
- Must lead a team of Big Data Engineers and drive the project to completion
- Proficient in Spark (Python/Scala); hands-on with Databricks (Spark certification preferred)
- Proficient with the latest big data tools/technologies such as Hive, Hadoop, YARN, NiFi, Kafka, and Spark Streaming
- Proficiency in any of these programming languages: Scala, Python, or Java
- Good to have: knowledge of working with Presto
- Good to have: experience in streaming data processing using Azure IoT, Kafka, and Spark Streaming
- Good to have: experience with Apache NiFi

Perks and Benefits: Attractive CTC with additional benefits
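For candidates less familiar with the terminology, the ETL pipeline work described above follows the extract-transform-load pattern. The sketch below is a toy stdlib-Python illustration of that pattern only; the role itself uses Azure Data Factory, Databricks, and Spark, and all data and function names here are hypothetical:

```python
import csv
import io

# Hypothetical raw source data standing in for an extract from a
# Data Lake landing zone (e.g. ADLS / Blob Storage in the real stack).
RAW = """order_id,region,amount
1,south, 250
2,west,100
3,south,75
"""

def extract(raw_csv):
    """Extract: parse raw CSV text into row dictionaries."""
    return list(csv.DictReader(io.StringIO(raw_csv)))

def transform(rows):
    """Transform: normalize fields and cast amounts to numbers."""
    return [
        {"order_id": int(r["order_id"]),
         "region": r["region"].strip().title(),
         "amount": float(r["amount"])}
        for r in rows
    ]

def load(rows):
    """Load: aggregate totals per region (stands in for a sink write)."""
    totals = {}
    for r in rows:
        totals[r["region"]] = totals.get(r["region"], 0.0) + r["amount"]
    return totals

totals = load(transform(extract(RAW)))
print(totals)  # {'South': 325.0, 'West': 100.0}
```

In a production Azure pipeline these three steps would typically map to an ADF copy activity (extract), a Databricks Spark job (transform), and a write to a curated Data Lake zone (load).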
Keyskills :
ETL, Informatica, Data Modeling, Erwin, Unix, Big Data, Data Processing, Data Governance, Behavioral Training, Client Communication, Business Requirements, Technical Requirements, ADF, IoT, CTC, Java, Hive, YARN, Data Lake