Job Location | Bangalore, Chennai, Hyderabad, Kolkata |
Education | Not Mentioned |
Salary | Not Disclosed |
Industry | NBFC (Non-Banking Financial Services) |
Functional Area | General / Other Software |
Employment Type | Full-time |
Responsibilities:
- Experience building on AWS using S3, EC2, Aurora, EMR, Lambda, Step Functions, etc. preferred.
- Experience with Hive, PySpark, and Python preferred.
- Good analytical skills with excellent knowledge of SQL.
- Experience using software version control tools (Git).
- AWS certifications or other related professional technical certifications.
- 3+ years of experience in Big Data stack environments (EMR, Hadoop, MapReduce, Hive).
- 3+ years of work experience with very large data warehousing environments.
- 1+ years of experience with data modelling concepts.
- 3+ years of Python and/or Java development experience.
- 2+ years of experience in Test-Driven Development for PySpark code.
- Flexible and proactive/self-motivated working style with strong personal ownership of problem resolution.
- Excellent communicator (written and verbal, formal and informal).
- Ability to multi-task under pressure and work independently with minimal supervision.
- Must be a team player and enjoy working in a cooperative and collaborative team environment.
- Adaptable to new technologies and standards.
- Experience working with other engineers in defining data engineering best practices and leveraging software development life cycle best practices such as agile methodologies, coding standards, code reviews, source management, build processes, testing, and operations.

To qualify for the role, you must have:
- BE/BTech/MCA.
- Minimum 3 years of hands-on experience in one or more key areas.
- 5 to 10 years of industry experience.
Keyskills :
java, sql, javascript, sql server, jquery, data modeling, version control, data warehousing, analytical skills, agile methodologies, software development, aws