Job Location | Ahmedabad
Education | Not Mentioned
Salary | Not Disclosed
Industry | IT - Software
Functional Area | IT Operations / EDP / MIS
Employment Type | Full-time
Responsibilities:
Create and maintain optimal data pipeline architecture.
Build the infrastructure required for optimal ETL of data from a variety of data sources using AWS technologies.
Build large-scale batch and real-time data pipelines with data processing frameworks such as Spark, Storm, or other AWS technologies.

Skills Required:
Good understanding of Java / Python.
B.Tech / M.Tech in CS with a major in Data Engineering / Big Data.
Understanding of Spark and other big data technologies.
Experience in data modeling, ETL design, implementation, and maintenance is an added advantage.

Personality:
You want to work in a small, agile team.
You mentor other developers when needed.
You work hard and don't need much oversight.
You like variety in your projects.
You want to be proud of what you do at your job.
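The batch ETL pattern this role centers on can be sketched minimally in plain Python (no Spark; the record layout, field names, and cleanup rule below are hypothetical illustrations, not part of the posting):

```python
# Minimal extract-transform-load (ETL) sketch in plain Python.
# A real pipeline would read from external sources and write to a
# warehouse; lists stand in for both here.

def extract(rows):
    """Extract: yield raw records from a source (here, an in-memory list)."""
    for row in rows:
        yield row

def transform(records):
    """Transform: normalize names and drop records missing an amount."""
    for rec in records:
        if rec.get("amount") is None:
            continue  # skip incomplete records
        yield {"name": rec["name"].strip().title(),
               "amount": float(rec["amount"])}

def load(records):
    """Load: collect into a target store (a list standing in for a table)."""
    return list(records)

raw = [
    {"name": "  alice ", "amount": "10.5"},
    {"name": "bob", "amount": None},   # dropped by transform
    {"name": "carol", "amount": "3"},
]

warehouse = load(transform(extract(raw)))
```

In a Spark-based pipeline the same three stages map onto reading a DataFrame, applying transformations, and writing the result out, with Spark distributing each stage across a cluster.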
Keyskills :
SQL, Java, data warehousing, Python, big data, data modeling, data processing, commercial models, project administration, ETL, AWS, Spark, Agile, design, pipeline, pipelines, infrastructure, implementation, Informatica, real-time data