Job Location: Gurugram
Education: Not Mentioned
Salary: Not Disclosed
Industry: Logistics / Courier / Transportation
Functional Area: General / Other Software
Employment Type: Full-time
Senior Data Developer

Description
We are looking for a Big Data Engineer to help shape our technology and product roadmap. You will be part of a fast-paced, entrepreneurial team that builds Big Data and batch/real-time analytical solutions leveraging transformational technologies (Python, Scala, Java, Hadoop, MapReduce, Kafka, Hive, HBase, Spark, Storm, etc.) to deliver innovative solutions. As a Data Developer at Delhivery, you are expected to work closely with the Product team on rapid development of new features and enhancements, work with the Data Services team to design and build data stores, write effective MapReduce jobs and aggregations, write quality code, and work with your lead and team to drive quality and efficiency.

Responsibilities
- Deliver high-value, next-generation products on aggressive deadlines, writing high-quality, highly optimized, high-performance, and maintainable code
- Manage ETL/ELT pipelines of various microservices
- Work on distributed/big-data systems to build, release, and maintain an always-on, scalable data processing and reporting platform
- Work on relational and NoSQL databases
- Build scalable architectures for data storage, transformation, and analysis
- Work effectively in a fast-paced and dynamic environment

Qualifications
- 3+ years of overall IT experience across a variety of industries, including hands-on experience in Big Data analytics and development
- 2+ years of experience writing PySpark for data transformation
- 2+ years of experience with detailed knowledge of data warehouse technical architectures, ETL/ELT, reporting/analytic tools, and data security
- 2+ years of experience designing data warehouse solutions and integrating technical components
- 2+ years of experience leading data warehousing and analytics projects, including AWS technologies such as Redshift, S3, EC2, Data Pipeline, and other big data technologies
- 1+ year of experience with BI implementation in the cloud
- Exposure to at least one ETL tool; exposure to a cloud data warehouse is a plus
- Exposure to at least one reporting tool such as Redash, Tableau, or similar is a plus
- Familiarity with Linux/Unix scripting
- Expertise with tools in the Hadoop ecosystem, including Pig, Hive, HDFS, MapReduce, Sqoop, Storm, Spark, Kafka, YARN, Oozie, and ZooKeeper
- Solid experience building REST APIs, Java services, or Docker microservices
- Experience with data pipelines using Apache Kafka, Storm, Spark, AWS Lambda, or similar technologies
- Experience working with terabyte-scale data sets using relational databases (RDBMS) and SQL
- Experience using Agile/Scrum methodologies to iterate quickly on product changes, developing user stories and working through backlogs
- Experience with Hadoop, MPP database platforms, and other NoSQL technologies (MongoDB, Cassandra) is a big plus
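The role above calls for writing effective MapReduce-style aggregations. As a minimal, purely illustrative sketch (pure Python with invented sample data, not the Spark/Hadoop stack the posting describes), the map/shuffle/reduce pattern for aggregating shipment counts per city looks like:

```python
from collections import defaultdict
from functools import reduce

# Hypothetical sample records; a real pipeline would read from HDFS, Kafka, etc.
records = [
    {"city": "Gurugram", "shipments": 120},
    {"city": "Hyderabad", "shipments": 85},
    {"city": "Gurugram", "shipments": 40},
    {"city": "Chennai", "shipments": 60},
]

# Map phase: emit (key, value) pairs.
mapped = [(r["city"], r["shipments"]) for r in records]

# Shuffle phase: group values by key.
grouped = defaultdict(list)
for key, value in mapped:
    grouped[key].append(value)

# Reduce phase: fold each group's values into a single total per key.
totals = {key: reduce(lambda a, b: a + b, values)
          for key, values in grouped.items()}

print(totals)  # {'Gurugram': 160, 'Hyderabad': 85, 'Chennai': 60}
```

In Spark or Hadoop the same three phases are distributed across a cluster; the shuffle step is where most of the performance tuning mentioned in the responsibilities happens.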
Keyskills:
big data analytics, big data, Apache Kafka, data services, data analytics, data processing, data warehousing, relational databases, ETL, AWS, Pig, Java, Hive, Unix, HDFS, YARN, Linux, user stories, reporting tools