Job Location | Pune
Education | Not Mentioned
Salary | Not Disclosed
Industry | Banking / Financial Services
Functional Area | DBA / Data Warehousing, General / Other Software
Employment Type | Full-time
* Duties and Responsibilities

Key Role Responsibilities:
1. Full-stack engineering capability for at-scale, end-to-end delivery.
2. Develop, build, manage, and evolve data pipelines for various data engineering projects.
3. The role is 90% hands-on, as an individual contributor.
4. End-to-end management of data flow, including extraction, loading, and transformation to SQL, NoSQL, streaming infrastructure, and object stores.
5. Cross-functional engagements, including support functions such as ML/AI, EDW, Big Data, IT, COE, and Business.
6. Participate in internal and cross-functional meetings across IT, Data & Data Science, and business projects.

Basic Qualifications (Mandatory):
1. Minimum 5 to 6 years managing and working with big data technologies such as Spark, ADLS, and Azure Data Factory.
2. Deep understanding of data warehousing and cloud ETL/ELT tools.
3. Hands-on experience in the following areas:
   a. Oracle Database (11g / 12c / 18c / 19c)
   b. Knowledge of at least one of the following languages: Scala, Python, or Java
   c. Deep knowledge of Spark shells (Scala, PySpark, Spark SQL, SparkR)
   d. Working with cloud platforms (MS Azure, AWS, OCI, or GCP)
   e. Microsoft Azure stack (EDW, Data Lake, Data Factory, Databricks, Synapse, Cosmos, etc.)
   f. Big data ecosystem (such as Spark and Hive)
   g. Production-scale databases (such as Oracle, MS SQL Server, MySQL, PostgreSQL)
   h. NoSQL stores (such as Cosmos DB, HBase, MongoDB, Cassandra)
   i. Data warehousing (such as Microsoft Azure DW, Redshift, Teradata)
   j. Data lakes (Azure, AWS, OCI, or GCP)

Other Important Aspects (Preferred):
1. Self-driven, with high levels of ownership
2. Hands-on, with the ability to provide technical mentoring
3. Strong verbal and written communication with effective articulation
4. Ability to work effectively across cross-functional teams
5. Strong teamwork and grounded collaboration (inter- and intra-team)
6. Exceptional execution management with proactive program management (without follow-ups)
7. Ability to work on complex business requirements and deliver effectively at scale
8. Constantly ahead of the curve on the latest technology and data-science-related technologies (relevant to BFL)

Educational Qualifications:
1. Bachelor's / BE / Master's (Computer Science, Mathematics, Statistics, Electronics & Communications)

Capability must be referenceable for a reference check if required.
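The extract-load-transform flow this role centers on can be sketched in a minimal, illustrative way. The example below uses only the Python standard library, with SQLite standing in for the production relational store; the table name, column names, and sample records are all hypothetical, not taken from the posting:

```python
import sqlite3

# Extract: raw records as they might arrive from a source system
raw_rows = [
    {"id": 1, "amount": "1200.50", "city": " pune "},
    {"id": 2, "amount": "980.00", "city": "Mumbai"},
]

# Transform: normalize types and clean up string fields
def transform(row):
    return (row["id"], float(row["amount"]), row["city"].strip().title())

# Load: write the cleaned rows into a relational store
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE payments (id INTEGER, amount REAL, city TEXT)")
conn.executemany("INSERT INTO payments VALUES (?, ?, ?)",
                 map(transform, raw_rows))

# Downstream consumers then query the cleaned data
total = conn.execute("SELECT SUM(amount) FROM payments").fetchone()[0]
```

In a production pipeline of the kind described above, the same three stages would typically be expressed with Spark (e.g. PySpark DataFrames) reading from ADLS or another object store and writing to SQL, NoSQL, or streaming sinks.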
Keyskills :
java, sql server, framework, ms sql server, source system analysis, ms sql, big data, data flow, data science, microsoft azure, oracle database, data warehousing, data engineering, work effectively, sql