Job Location | Pune
Education | Not Mentioned
Salary | Not Disclosed
Industry | IT - Software
Functional Area | General / Other Software
Employment Type | Full-time
Introduction
Ridecell is seeking a highly skilled data engineer with a proven track record of building and scaling state-of-the-art data systems. This is a lead position in the Analytics and Data Science group, responsible for building the foundation of the data architecture.

Responsibilities
- Own the security of the data
- Build and enhance the building blocks of the Ridecell real-time analytics platform
- Instill data-oriented thinking across the company's rank and file
- Data modeling, ETL setup, Hadoop cluster scaling, and integration of reporting tools such as Tableau
- Build a company data warehouse and search platform for business analytics
- Create ad-hoc queries and reports, and educate others to create queries as needed
- Automate and document processes; improve performance bottlenecks
- Design and publish custom dashboards for product teams and stakeholders across the company
- Collaborate with the Data Science, Product, and Support Engineering teams to build new solutions

Required
- MS degree in Computer Science, Mathematics, or Data Science, with at least 8 years of work experience
- Problem solver with excellent written communication skills (we prefer well-written documents to PowerPoint presentations)
- Intimate knowledge of SQL, particularly Postgres
- Practical programming experience in at least one programming language
- Strong experience working with ultra-large data sets
- Experience building ETL with open-source tools such as Talend or Pentaho
- Very good experience in UNIX/Linux
- Capable of planning and executing on both short-term and long-term goals, individually and with the team
- Ability to establish processes and bring in solutions with structured, flexible, and scalable frameworks

Preferred
- Experience working in R or MATLAB
- Experience designing data storage structures such as JSON (BSON), XML, Avro, and Parquet
- Experience with AWS tools and technologies (S3, EMR, Kinesis, etc.) and GCP tools
- Experience with geospatial queries and pivot tables
- Experience with streaming data pipelines such as Kafka, AWS Kinesis, or Spark Streaming
- Experience with demand planning for future data warehouse needs
- Intimate knowledge of Statistics and/or Machine Learning
- Familiarity with columnar data stores
- Familiarity with Python/Django is a plus
Keyskills :
SQL, Java, data warehousing, Python, open source, data science, data modeling, demand planning, computer science, machine learning, commercial models, Informatica, polymerase chain reaction (PCR), reporting tool, communication skills