Job Location: Pune
Education: Not Mentioned
Salary: Rs 10 - 20 Lakh/Yr
Industry: Banking / Financial Services
Functional Area: General / Other Software
Employment Type: Full-time
Roles and Responsibilities

Big data engineers, who typically hold computer engineering or computer science degrees, need to know the basics of algorithms and data structures; distributed computing; Hadoop cluster management; HDFS; MapReduce; stream-processing solutions such as Storm or Spark; big data querying tools such as Pig, Impala, and Hive; data integration; NoSQL databases such as MongoDB, Cassandra, and HBase; frameworks such as Flume and ETL tools; messaging systems such as Kafka and RabbitMQ; and big data toolkits such as H2O, SparkML, and Mahout.

Basic Qualifications:
1. Minimum 6+ years of industry experience, with at least 3+ years in the Big Data stack (HDFS, Spark, etc.)
2. Strong techno-functional management skills, working with business teams on business use cases and programs
3. Hands-on experience as a data lake, data warehouse, or big data/analytics developer, or as a Big Data Technical Lead; experience in analytics, software development, or large-data-volume environments is highly desired
4. Strong hands-on experience in Hadoop, Spark, R/Python, and Hive is mandatory
5. Implementation experience across the Big Data ecosystem (such as Hadoop, Spark, R/Python, Hive), databases (such as Oracle, MS SQL Server, MySQL, PostgreSQL), NoSQL stores (such as HBase, MongoDB, Cassandra, Cosmos, Arango, Orient), data warehousing (such as Microsoft Azure DW, Redshift, Teradata, Vertica), and data migration, ETL (AWS Glue, Azure Data Factory, Informatica, SSIS, etc.), and integration
6. Good architectural inclination; has participated and contributed at core design levels, with the ability to manage and work on complex, large data projects
7. Strong IT and business interaction capability
8. Management of the Data Lake team, which includes data engineers, partners, and IT integration
9. Strong team player with exceptional program management (execution); self-drive and high ownership are key
10. Staying constantly ahead of the curve in the latest data management and data science technologies and capabilities relevant to BFL and the business is a nice-to-have
11. Ability to understand complex business requirements and render them as prototype systems with quick turnaround time

Required Candidate Profile:
1. Experience with basic data science, base-level statistics, machine learning, and modelling
2. Working knowledge of modern software development practices and technologies such as agile methodologies and DevOps
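Among the fundamentals the posting lists is MapReduce. As a rough illustration only, the sketch below walks through the map, shuffle, and reduce phases of a word count in plain Python; a real deployment would run this on Hadoop or Spark, and the sample input lines are hypothetical.

```python
from collections import defaultdict

def map_phase(lines):
    # Emit (word, 1) pairs, as a Hadoop mapper would.
    for line in lines:
        for word in line.split():
            yield word.lower(), 1

def shuffle(pairs):
    # Group values by key, mimicking the framework's shuffle/sort step.
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    # Sum the counts for each word, as a reducer would.
    return {word: sum(counts) for word, counts in groups.items()}

# Hypothetical sample input, not from the posting.
lines = ["big data big pipelines", "data pipelines at scale"]
counts = reduce_phase(shuffle(map_phase(lines)))
```

The three functions mirror the contract a candidate is expected to know: mappers emit key-value pairs, the framework groups them by key, and reducers aggregate each group independently, which is what makes the pattern parallelizable.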
Keyskills: team management, customer relations, delivery, documentation, automation, MS SQL Server, source system analysis, MS SQL, big data, ETL tools, SQL Server, data science, data migration, data management, data structures, icroso