hireejobs

Data Engineer ETL with Spark

3.00 to 5.00 Years | Chennai | 30 Aug, 2019
Job Location: Chennai
Education: Not Mentioned
Salary: Not Disclosed
Industry: IT - Software
Functional Area: General / Other Software
Employment Type: Full-time

Job Description

We need someone with 3-5 years of extensive experience in data warehousing, ETL, and big data technologies (Hadoop, Hive, Sqoop, etc.), including 2+ years of mandatory experience in Spark with Python/Scala and more than one end-to-end implementation.

Roles and Responsibilities:
- Develop Scala or Python scripts and UDFs, using DataFrames/SQL/Datasets and RDDs in Spark 2.3+, for data aggregation and queries, and write data back into the OLTP system through Sqoop.
- Have a very good understanding of partitioning and bucketing concepts; design both managed and external tables and ORC files in Hive to optimize performance.
- Write and implement Spark and Scala scripts to load data from, and store data into, Cassandra, HBase, or any NoSQL store.
- Implement SCD Type 1 and Type 2 models using Spark.
- Develop Oozie workflows for scheduling and orchestrating the ETL process.
- Tune the performance of Spark applications: set the right batch interval, the correct level of parallelism, and appropriate memory settings.
- Stream data into Elasticsearch for visualization using Kibana.
- Implement mapping parameters/variables at the mapping and session level to increase code reusability and parameterize hardcoded values.

Additional skills:
- Knowledge of the AWS stack: AWS Glue, S3, SQS
- Exposure to Elasticsearch or Solr is a plus
- Exposure to NoSQL databases: Cassandra, MongoDB
- Exposure to serverless computing
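The SCD Type 2 responsibility above can be sketched in plain Python (without a Spark dependency) to show the core merge logic: when a tracked attribute changes, the current dimension row is expired and a new current row is appended. The schema (`id`, `attrs`, `start_date`, `end_date`, `is_current`) is a hypothetical illustration, not from the posting; in Spark the same logic would typically be expressed as DataFrame joins.

```python
from datetime import date

def scd2_merge(dim_rows, incoming, key="id", today=None):
    """Merge incoming records into a Type 2 dimension table:
    unchanged records are skipped, changed records expire the
    old version and append a new current one (hypothetical schema)."""
    today = today or date(2019, 8, 30)
    out = list(dim_rows)
    # Index the current (open) version of each dimension member.
    current = {r[key]: r for r in out if r["is_current"]}
    for rec in incoming:
        old = current.get(rec[key])
        if old and old["attrs"] == rec["attrs"]:
            continue  # no change: keep the existing current row
        if old:
            old["is_current"] = False  # expire the previous version
            old["end_date"] = today
        out.append({key: rec[key], "attrs": rec["attrs"],
                    "start_date": today, "end_date": None,
                    "is_current": True})
    return out
```

A Type 1 model, by contrast, would simply overwrite `attrs` in place, keeping no history.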

Keyskills :
sql, java, data warehousing, python, big data, elastic search, data aggregation, etl, aws, hive, solr, oltp, glue, oozie, spark, scala, informatica, performance tuning


© 2019 Hireejobs All Rights Reserved