Job Location | Mumbai City |
Education | Not Mentioned |
Salary | Not Disclosed |
Industry | Recruitment Services |
Functional Area | General / Other Software |
Employment Type | Full-time |
Client of Forstaffing — Python + Hadoop Developer
Published: September 21, 2016

Description

Designation: Python + Hadoop Developer
Experience required: 5–7 years
Reporting to: Architect
Vacancy: 1
Cab: No
Working days: 5/6 (alternate Saturday off)
Shift: Day shift
Job location: Mumbai (Andheri), India
Position type: Permanent
Salary: up to 13L per annum (international travel to the client site will be involved)
Skills: Python, Java, Hadoop, Git, Amazon Web Services, Celery, ETL

We are looking for a capable DevOps Engineer with a strong background in Big Data technologies and Python.

Technology stack for the application:
- Data storage + analytics: AWS / Cloudera (on-premise) Hadoop ecosystem with MongoDB; Elasticsearch on S3 or on-premise
- Queueing system: RabbitMQ
- Programming: Python
- Front-end: HTML5 for the web app and Objective-C for iOS; potentially a hybrid framework for iOS and Android
- CDN: AWS or on-premise routing

Responsibilities:
- Responsible for the daily operation of data systems such as Hadoop / Spark / OLAP, ensuring the data platform is stable and runs efficiently.
- Responsible for application deployment, maintenance, administration, and monitoring.
- Responsible for automation of build, deployment, and operational tasks throughout the software development life cycle (SDLC).
- Bring a DevOps mindset to working with teams including Release Management, Operations, Production Engineering, and Engineering, to ensure end-to-end solutions are designed and implemented that make build, delivery, and monitoring mechanisms reliable, transparent, measurable, scalable, and transportable.
- Responsible for data platform capacity planning, performance optimization, architecture audits, and related work.
- Responsible for development work on automation, maintenance, and monitoring.
- Develop and demonstrate detailed, independent ownership of supported systems, including configurations, monitoring, and documentation.
- Troubleshoot and resolve functionality and performance issues across the application stack — hardware, operating system, network, and security — to address issues that impact release and service delivery.
- Follow engineering best practices such as architectural design, unit and regression testing, test-driven development, and continuous integration frameworks.
- Work with the infrastructure team to ensure that all required monitoring, exception handling, and fault tolerance is in place for a production-quality platform.

Qualifications:
- Degree in a computer-related field; at least 5 years of intensive Python development in a production environment.
- Experience with at least one other development or scripting language (Java, Node, Go, etc.) and a good understanding of at least one systems programming language (C/C++).
- Knowledge of and competence in developing ETL processes and frameworks for large-scale, complex datasets.
- Experience with open-source asynchronous task queues such as Celery.
- Industry experience working on distributed, scalable, service-oriented platforms.
- Experience with multi-tiered system operations in high-volume, PCI-compliant transactional environments, including web, application, message-queuing, and database operational support and services.
- Fundamental understanding of network and load-balancing technologies.
- Experience building, implementing, and supporting monitoring tools.
- Deep understanding of the Cloudera/AWS ecosystem.
- Proven ability to deliver high-quality, production-ready code.
- Strong understanding of the agile software development life cycle.
- Experience in the management, configuration, operation, and maintenance of Hadoop, Spark, Hive, and data warehouses.
- Able to speak to reliability and performance challenges you've conquered in the past.
- Thrive in a rapid development environment with an intense focus on quality.
- Have an allergic reaction to the words "defer" and "works on my machine".
- Strong customer service and communication skills (verbal and written) to effectively interact with a diverse team of people across business and engineering.
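The qualifications above ask for experience with asynchronous task queues such as Celery and with ETL development. Celery itself requires a running broker (e.g. the RabbitMQ listed in the stack), so the sketch below shows the same worker-queue ETL pattern using only the Python standard library; the record layout and transform are illustrative assumptions, not part of the posting:

```python
import queue
import threading

def transform(record):
    # Toy ETL transform step: derive a new record from an input record.
    # (Illustrative only -- the real pipeline's schema is not specified.)
    return {"id": record["id"], "value": record["value"] * 2}

def worker(tasks, results):
    # Pull records off the queue until the sentinel value None arrives,
    # mirroring how a Celery worker consumes tasks from a broker.
    while True:
        record = tasks.get()
        if record is None:
            break
        results.append(transform(record))
        tasks.task_done()

tasks = queue.Queue()
results = []
t = threading.Thread(target=worker, args=(tasks, results))
t.start()

# Producer side: enqueue a small batch, then signal shutdown.
for i in range(3):
    tasks.put({"id": i, "value": i})
tasks.put(None)
t.join()
# results now holds the transformed records, in FIFO order.
```

With Celery, the `transform` function would instead be decorated as a task and dispatched with `.delay()`, with RabbitMQ carrying the messages between producer and workers.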
Keyskills:
hive, hadoop, sqoop, pig, java, software development life cycle, amazon web services, open source, web services, load balancing, customer service, big data, life cycle, objective c, elasticsearch, fault tolerance, development