Hyderabad Jobs |
Bangalore Jobs |
Chennai Jobs |
Delhi Jobs |
Ahmedabad Jobs |
Mumbai Jobs |
Pune Jobs |
Vijayawada Jobs |
Gurgaon Jobs |
Noida Jobs |
Job Location | Noida |
Education | Not Mentioned |
Salary | Not Disclosed |
Industry | IT - Software |
Functional Area | General / Other Software |
Employment Type | Full-time |
Requirements and General Skills
- All-round experience developing and delivering large-scale business applications on scale-up systems as well as scale-out distributed systems.
- Responsible for the design and development of applications on different data platforms.
- Should implement complex algorithms in a scalable fashion.
- Core data-processing skills with tools such as Hive/Impala are highly important.
- Should be able to write MapReduce jobs or Spark jobs for implementation.
- Ability to write Java-based middle-layer orchestration between various components on the Hadoop/Spark stack.
- Work closely with product and analytics managers, user-interaction designers, and other software engineers to develop new offerings and improve existing ones.

Personal Skills
- Good analytical and problem-solving skills.
- Good communication skills: effective oral, written, and presentation skills.
- Responsible team player with a go-getter attitude.

Technical Skills
- B.Tech or Master's degree in Computer Science or an equivalent discipline from a reputed university.
- 1-3 years' experience building software or web applications with object-oriented or functional programming languages. The language doesn't matter; the focus is on writing clean, well-designed, and scalable code on MapReduce.
- Experience with Big Data technologies such as Hadoop, Hive, Spark, or Storm.
- Experience with streaming technologies such as Kafka, Spark, or Flink.
- Experience with scalable systems, large-scale data processing, and ETL pipelines.
- Experience with SQL and relational databases such as Postgres or MySQL.
- Experience with NoSQL databases such as DynamoDB or CloudSearch, or open-source variants such as Cassandra, HBase, Solr, or Elasticsearch.
- Experience with DevOps tools (GitHub, Travis CI, JIRA) and methodologies (Lean, Agile, Scrum, Test-Driven Development).
- Experience building and deploying applications on both on-premise and AWS cloud-based infrastructure.

We are an equal opportunity employer and value diversity in our company. We do not discriminate on the basis of race, religion, color, national origin, gender, sexual orientation, age, marital status, veteran status, or disability status.
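The role above asks for the ability to write MapReduce or Spark jobs. As a minimal illustration of what the MapReduce pattern involves (a plain-Python sketch of the map/shuffle/reduce phases, not actual Hadoop or Spark code), a word count might look like:

```python
from collections import defaultdict

def map_phase(records):
    # Mapper: emit (word, 1) pairs for each word in each input line.
    for line in records:
        for word in line.split():
            yield word.lower(), 1

def shuffle(pairs):
    # Shuffle: group all emitted values by key, as the framework would.
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    # Reducer: aggregate the grouped values per key.
    return {key: sum(values) for key, values in groups.items()}

lines = ["Spark and Hadoop jobs", "Spark streaming"]
counts = reduce_phase(shuffle(map_phase(lines)))
# counts -> {'spark': 2, 'and': 1, 'hadoop': 1, 'jobs': 1, 'streaming': 1}
```

In a real Hadoop or Spark deployment the shuffle and parallelism are handled by the framework; only the map and reduce logic is written by the engineer.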
Keyskills :
SQL, Java, Data Warehousing, Informatica, Python, Big Data, Core Data, Open Source, Data Processing, Personal Skills, Computer Science, Technical Skills, User Interaction, Web Applications, Software Engineers, Presentation Skills