Job Location | Bangalore |
Education | Not Mentioned |
Salary | Not Disclosed |
Industry | Recruitment Services |
Functional Area | General / Other Software |
Employment Type | Full-time |
Requirements :
- Experience working on a Hadoop distribution (CDH/HDP/MapR)
- Hands-on experience with MapReduce, Hive 2.x*, Spark 2.x*
- Well versed in enterprise Core Java 8
- Conceptual knowledge of data structures & algorithms
- In-depth knowledge of various design patterns (Java/Big Data) and data processing patterns (batch/NRT/RT processing), with the ability to provide the design and architecture for typical business problems
- Knowledge of and experience with NoSQL databases (Cassandra/HBase/MongoDB/CouchDB/Neo4j) and SQL databases (MySQL/Oracle)
- Kafka, Redis, and distributed message queues, along with distributed caching
- Proficient understanding of build tools (Ant/Maven) and code versioning tools (Git) with continuous integration
- Exposure to and awareness of the complete PDLC/SDLC, along with experience working on projects using the Agile Scrum methodology
- Excellent communication, problem-solving, and analytical skills, with the ability to thrive in a fast-paced, dynamic environment and operate under stringent deadlines
- Confident, highly motivated, and passionate about delivery and customer satisfaction
- Strong technical development experience writing performant code that follows best coding practices
- Out-of-the-box thinker, not limited to the work done in existing assignment(s)

Good to Have :
- Knowledge of/experience working on search platforms (Solr/Elasticsearch), as well as designing and implementing RESTful APIs
- Experience with cloud environments (AWS/GCP/Azure); exposure to containers and container management platforms (Docker/Kubernetes)
- Understanding of the Data Lake vs. Data Warehouse concept, along with the ability to perform a comparative analysis of data stores and knowledge of/experience with their creation and maintenance
- Experience with Big Data ML toolkits (Spark ML/Mahout)
- Knowledge of data privacy, data governance, data compliance, and security
- Programming experience with Python/Scala
- Experience building and maintaining optimal, reliable data pipelines so as to deliver solutions on the fly
- Experience working on open source products

What You Will Do :
- Design and implement solutions for problems arising out of large-scale data processing
- Provide the team with the technical direction(s)/approach(es) to be taken and guide them in the resolution of queries/issues
- Attend/drive architectural, design, and status calls with multiple stakeholders
- Ensure end-to-end ownership of all assigned tasks
- Design, build, and maintain efficient, reusable, and reliable code

Skills | Java |
Qualification | B.E/B.Tech |
Keyskills :
big data, core java, open source, build tools, sql database, data privacy, data processing, data structures, problem solving, data governance, design patterns