Job Location | Gurugram
Education | Not Mentioned
Salary | Not Disclosed
Industry | Media / Dotcom / Entertainment
Functional Area | IT Operations / EDP / MIS
Employment Type | Full-time
Responsibilities:
- Understand and provide innovative solutions to business and product requirements using Big Data architecture.
- Take ownership of the end-to-end data pipeline, including system design and integration of the required Big Data tools and frameworks.
- Implement ETL processes and build the data warehouse (HDFS, S3, etc.) at scale.
- Develop highly performant Spark jobs for deriving data insights and building user preferences.
- Develop the querying and reporting tools required by various business teams.

Requirements:
- Working knowledge of Linux systems and distributed systems is a must.
- Knowledge of a scripting language such as Python or Scala.
- Hands-on experience writing Spark or MapReduce jobs and a proficient understanding of distributed computing principles.
- Experience integrating data from multiple data sources.
- Experience with messaging systems such as Kafka.

Preferred:
- Understanding of Spark and HDFS internals.
- Experience with the Lambda Architecture and building the required infrastructure.
- Experience with Big Data querying tools such as Hive, Pig, and Impala.
- Experience with NoSQL databases such as HBase, Cassandra, and MongoDB.
- Experience with resource managers such as YARN and Mesos.
- Experience with stream-processing systems such as Storm and Spark Streaming.
- Knowledge of Lucene, Solr, Elasticsearch, or any similar technology is a plus.
- Prior experience developing segmentation and recommendation systems is recommended.
- Experience with reporting tools like Apache Zeppelin.

Excited to join our data engineering team? Tell us more by dropping your resume at careers@roposo.com.
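To give a feel for the extract-transform-load work the role describes, here is a toy, stdlib-only Python sketch: parse raw event records, drop malformed ones, and aggregate per-user counts. A production pipeline would do this in Spark against HDFS/S3; the record format and field names here are purely hypothetical.

```python
# Toy ETL sketch (stdlib only): extract raw JSON lines, transform into
# per-user event counts, "load" into a dict. All field names hypothetical.
import json
from collections import Counter

RAW_EVENTS = [
    '{"user": "u1", "action": "view"}',
    '{"user": "u2", "action": "like"}',
    'not-json',                          # malformed record, dropped in extract
    '{"user": "u1", "action": "like"}',
]

def extract(lines):
    """Parse each line as JSON, skipping malformed records."""
    for line in lines:
        try:
            yield json.loads(line)
        except json.JSONDecodeError:
            continue

def transform(events):
    """Count events per user -- a stand-in for deriving user preferences."""
    return Counter(e["user"] for e in events)

def load(counts):
    """'Load' into a plain dict; a real pipeline would write to HDFS or S3."""
    return dict(counts)

result = load(transform(extract(RAW_EVENTS)))
print(result)  # {'u1': 2, 'u2': 1}
```

In Spark the same shape becomes a read, a filter, and a `groupBy`/`count`, distributed across executors rather than run in one process.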
Keyskills : SQL, Java, data warehousing, Python, big data, system design, Elasticsearch, data engineering, data architecture, distributed systems, product requirements, ETL, Pig, Hive, HDFS, YARN, Solr, Linux, Informatica, reporting tools