Job Location | Pune |
Education | Not Mentioned |
Salary | Not Disclosed |
Industry | IT - Software |
Functional Area | General / Other Software |
Employment Type | Full-time |
1. Designing, building and operationalizing large-scale enterprise data solutions and applications using one or more GCP data and analytics services in combination with third-party tools: Spark, Hive, Cloud Dataproc, Cloud Dataflow, Apache Beam/Composer, Bigtable, BigQuery, Cloud Pub/Sub, Cloud Storage, Cloud Functions and GitHub.
2. Designing and building data pipelines from data ingestion to consumption within a hybrid big data architecture, using cloud-native GCP services, DBT, SQL, etc.
3. Experience with data lake and data warehouse ETL design and build.
4. Experience in data analysis, data modeling and profiling.
5. Strong experience in programming languages such as SQL and Python/Java.
6. Experience in an agile (Scrum) development environment.
7. Experience scheduling/automating scripts.
8. Experience with the Linux command line and Bash scripting.
9. Good experience parsing data formats such as XML/JSON and using third-party APIs.
10. Official Google Data Engineer certification is beneficial.
11. Ability to scope a project based on a technical brief and work with the DevOps and QA teams to provide a detailed project plan including:
12. Data flow diagrams for process flow.
13. Database schemas and normalisation.
14. Scalable environment architecture suggestions.
15. Experience with streaming data is beneficial.
16. Experience with Git/GitHub.
17. Experience with Dataflow, Google Pub/Sub or other queuing software is beneficial.
Job Segment: Cloud, Developer, Quality Assurance, Database, SQL, Technology
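For the scheduling/automating-scripts requirement, candidates are typically expected to know basic cron. A minimal crontab sketch (the script path, schedule and log path are illustrative, not from this posting):

```shell
# Hypothetical crontab entry: run an ingestion script daily at 02:00,
# appending both stdout and stderr to a log file (edit with `crontab -e`)
0 2 * * * /opt/pipelines/ingest.sh >> /var/log/ingest.log 2>&1
```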
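The requirement on parsing XML/JSON from third-party APIs can be illustrated with a short sketch using only the Python standard library. The payloads, field names and "order" schema below are invented for illustration; they do not come from any particular API:

```python
import json
import xml.etree.ElementTree as ET


def parse_order_json(payload: str) -> dict:
    """Extract a flat record from a JSON API response (hypothetical schema)."""
    data = json.loads(payload)
    return {"id": data["order"]["id"], "total": float(data["order"]["total"])}


def parse_order_xml(payload: str) -> dict:
    """Extract the same record from an equivalent XML response."""
    root = ET.fromstring(payload)
    return {"id": root.findtext("id"), "total": float(root.findtext("total"))}


# Both formats should normalize to the same record.
json_payload = '{"order": {"id": "A-100", "total": "42.50"}}'
xml_payload = "<order><id>A-100</id><total>42.50</total></order>"
assert parse_order_json(json_payload) == parse_order_xml(xml_payload)
```

Normalizing both formats into one record shape keeps downstream pipeline stages independent of the source format.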
Keyskills:
SAP, environment, delivery, customer relations, sales, big data, data flow, data analysis, data modeling, cloud storage, flow diagrams, data solutions, enterprise data, quality assurance, agile development, data architecture