hireejobs

PySpark & Spark Scala

5.00 to 9.00 Years | Hyderabad | 29 Dec, 2022
Job Location: Hyderabad
Education: Not Mentioned
Salary: Not Disclosed
Industry: IT - Software
Functional Area: IT Operations / EDP / MIS
Employment Type: Full-time

Job Description

We are hiring for PySpark & Spark Scala.
Location: Pan India
Notice period: 30-45 days

Job Summary:
- Experience in building data pipelines using Spark (Scala) on Databricks
- Development experience in the Amazon Cloud Environment, AWS (S3, EMR, Databricks, Amazon Redshift, Athena)
- Experience in working with REST APIs
- Ability to perform data manipulations and to load and extract data from several sources into another schema
- Ability to work with multiple file formats (JSON, XML, RDF, reports, etc.) and analyse data, if required, for further processing
- Experience in DevOps is nice to have, but knowledge of DevOps is required
- Understanding of core AWS services and basic AWS architecture best practices
- Snowflake experience is a plus
- Ability to understand requirements and changes to requirements

Experience: 5 to 9 years

Required Skills (Technical): Apache Spark, Databricks, Scala, Amazon Redshift
Domain Skills: Sales & Marketing, Digital Analytics, Data Management
Nice-to-have Skills (Technical): Spark

Roles & Responsibilities:
- Strong knowledge of Big Data, Hive, Sqoop and Spark; good knowledge of Scala
- Operations support and enhancements (L2/L3 support)
- A minimum of 1 year of experience in application maintenance and support
- Knowledge of / experience working in a distributed operations model
- Passionate about Continuous Integration / Continuous Delivery (CI/CD)

Service Management:
- Cater to incidents and service requests arising from the applications under operations
- Responsible for end-to-end lights-on support for the applications
- Coordinate with onsite and offshore teams as necessary during project delivery, including daily connect calls

Service Tracking:
- Ensure adherence to SOW requirements, including client security and compliance needs
- Follow up with internal and external stakeholders (customer and vendor liaison) to progress tickets to resolution
- Ensure adherence to defined processes (e.g. creating problem records, performing timely RCAs, creating knowledge articles, maintaining application documentation)
- Prepare performance dashboards and management reports
- Ensure schedule adherence for release requests and notify stakeholders in case of deviations

Data Modeling JD:
- Work with business stakeholders and subject matter experts (SMEs) to understand business processes and translate them into data models
- Develop and maintain conceptual and logical data models, ideally following Life Sciences domain data modeling guidelines
- Document and maintain the business glossary in the enterprise data catalog solution
- Evaluate business data models and physical data models for variances and discrepancies
- Design and development experience with domain data models (ideally the Life Sciences domain)
- Working knowledge of data modeling tools such as Erwin, ER/Studio, etc., and data cataloging tools
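The posting asks for the ability to extract records from several file formats (JSON, XML, etc.) and load them into another schema. As a rough illustration of that skill, here is a minimal, stdlib-only Python sketch that normalizes JSON and XML inputs into one common record shape; the field names (`customer_id`, `region`, `revenue`) and sample payloads are hypothetical, not from the posting, and a real pipeline on Databricks would do the equivalent with Spark readers instead.

```python
import json
import xml.etree.ElementTree as ET

# Hypothetical target schema: every source record becomes
# {"customer_id", "region", "revenue"} regardless of input format.

def from_json(payload: str) -> list[dict]:
    """Extract records from a JSON document into the target schema."""
    return [
        {"customer_id": r["id"], "region": r["region"], "revenue": float(r["revenue"])}
        for r in json.loads(payload)["records"]
    ]

def from_xml(payload: str) -> list[dict]:
    """Extract the same schema from an XML document."""
    root = ET.fromstring(payload)
    return [
        {
            "customer_id": rec.get("id"),
            "region": rec.findtext("region"),
            "revenue": float(rec.findtext("revenue")),
        }
        for rec in root.iter("record")
    ]

# Hypothetical sample inputs in two formats.
json_src = '{"records": [{"id": "c1", "region": "EMEA", "revenue": "120.5"}]}'
xml_src = ('<sales><record id="c2"><region>APAC</region>'
           '<revenue>80</revenue></record></sales>')

# Unify both sources into one list under the common schema.
rows = from_json(json_src) + from_xml(xml_src)
```

In Spark (Scala or PySpark) the same idea would typically be expressed by reading each format into a DataFrame and selecting/casting columns into the target schema before writing out.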

Keyskills: Spark, Scala


© 2019 Hireejobs All Rights Reserved