
Data Architect II (Data Pipelines & DataOps)

4 to 9 Years   Pune   29 Sep, 2021
Job Location: Pune
Education: Not Mentioned
Salary: Not Disclosed
Industry: Medical / Healthcare
Functional Area: General / Other Software
Employment Type: Full-time

Job Description

DESCRIPTION

We are building the next generation of healthcare diagnostics analytics products with a culture of data-driven insight and innovation. If you are ready to use your creativity and results-oriented critical thinking to meet complex challenges and develop new strategies for acquiring, analyzing, modelling, and storing data, apply for our Data Architect (Data Pipelines & DataOps) opening.

We are looking for someone to guide data engineering and QA teams, using the latest technology and information management methodologies to meet our requirements for building efficient, secure, and resilient data pipelines and ETL workflow orchestrations.

LOCATION

Pune, India

KEY RESPONSIBILITIES

  • The primary responsibility of this role is to architect, design, build, and maintain data pipelines that provision high-quality data ready for analysis. This includes internal enterprise data assets and external second- and third-party data. The architect should be a creative problem solver, able to identify new opportunities that can be rapidly prototyped and evaluated in close collaboration with stakeholders (see the orchestration sketch after this list).
  • Lead and Mentor Data Engineers: This role will be responsible for leading and developing a team of data engineers, focused on growing the team's skills and its ability to execute using DevOps and DataOps principles. It will also collaborate with the Data Engineering and Governance teams in building the Data Lake and EDW, ingesting data from various sources to enable advanced analytical capabilities.
  • Analyze data to unlock insights: Move beyond descriptive reporting by helping stakeholders identify relevant insights and actions from data.
  • Drive Automation through effective metadata management: Design or adopt solutions and frameworks that use innovative, modern tools, techniques, and architectures to partially or completely automate the most common, repeatable, and tedious data preparation and integration tasks, minimizing manual, error-prone processes and improving productivity.
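
To make the orchestration side of these responsibilities concrete, here is a minimal sketch of an extract-transform-load workflow written against Apache Airflow (which appears in the skills list below). The DAG id, task names, and sample records are hypothetical illustrations, not this team's actual pipeline.

# Illustrative sketch only: a three-step ETL DAG in Apache Airflow.
# The dag_id, task names, and sample data are hypothetical.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract(**context):
    # Pull raw records from a source system (stubbed here).
    return [{"sample_id": 1, "result": "42.0"}]

def transform(ti, **context):
    # Cast and validate fields so downstream consumers get clean, typed data.
    rows = ti.xcom_pull(task_ids="extract")
    return [{**row, "result": float(row["result"])} for row in rows]

def load(ti, **context):
    # Write curated rows to the target store (stubbed here).
    rows = ti.xcom_pull(task_ids="transform")
    print(f"loaded {len(rows)} rows")

with DAG(
    dag_id="diagnostics_etl",  # hypothetical name
    start_date=datetime(2021, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    transform_task = PythonOperator(task_id="transform", python_callable=transform)
    load_task = PythonOperator(task_id="load", python_callable=load)
    extract_task >> transform_task >> load_task

The extract >> transform >> load dependency chain stays the same whether each task invokes a Glue job, a Spark step on EMR, or plain Python, which is why this orchestration pattern transfers across the stacks listed below.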
REQUIRED EXPERIENCE, SKILLS & QUALIFICATIONS

To succeed in this role, you should have the following skills and experience:
  • Graduate or postgraduate degree in Computer Science/Software Engineering.
  • At least seven years of work experience in Data Ingestion & Management related to Big Data, EDW, analytical, or business intelligence disciplines, including data analysis, visualization, integration, modeling, etc.
  • At least four years of experience working on data lake foundations/platforms in cross-functional teams, collaborating with business stakeholders.
  • Expert-level knowledge in two or more of the following specialisms:
    • Data engineering patterns and practices for efficient, optimized use of raw data.
    • Data warehousing, semantic-layer definitions, and scaled data consumption patterns.
    • Distributed compute and parallel data processing.
    • Robust, enterprise-grade data integration, ingestion, management, and pipelines.
    • Data streaming and the associated Lambda- and Kappa-style data architectures.
  • Strong experience with advanced analytics tools for object-oriented/functional scripting using languages such as R, Python, Spark, Scala, or similar.
  • Strong experience with the AWS Cloud platform and services such as S3, CloudWatch, CloudTrail, SNS, API Gateway, Glue, Lambda, Kinesis, Athena, Airflow, and EMR (see the Athena example after this list).
  • Strong experience with popular database programming languages, including SQL, PL/SQL, etc., for relational databases like Redshift and Aurora, and with NoSQL/Hadoop-oriented databases like MongoDB, Cassandra, etc.
  • Strong experience in working with large, heterogeneous datasets in building and optimizing data pipelines, pipeline architectures and integrated datasets using traditional data integration technologies. These should include ETL/ELT, data replication/CDC, message-oriented data movement and upcoming data ingestion and integration technologies such as stream data integration and data virtualization.
  • Strong experience working with and optimizing existing ETL processes, data integration, and data preparation flows, and helping to move them into production.
  • Experience working with popular data discovery, analytics, and BI software tools like MicroStrategy, Tableau, Qlik, Power BI, and others for semantic-layer-based data discovery. Certification in one or more of these tools would be a plus.
  • Basic understanding of analytical methods including regression, forecasting, time series, cluster analysis, classification, etc. Experience with machine learning and AI would be a plus.
  • Basic understanding of popular open-source and commercial data science platforms such as Python, R, KNIME, Alteryx, and others is a strong plus.
  • Basic experience working with data governance, data quality, and data security teams, and specifically with privacy and security officers, in moving data pipelines into production with appropriate data quality, governance, and security standards and certification.
  • Demonstrated ability to work across multiple deployment environments including cloud, on-premises and hybrid, multiple operating systems and through containerization techniques such as Docker, Kubernetes, AWS Elastic Container Service and others.
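
As one concrete illustration of the AWS stack above, the following minimal sketch submits an Athena query over curated S3 data with boto3. The region, database, table, and results bucket are hypothetical placeholders.

# Illustrative sketch only: run an Athena query over curated S3 data via boto3.
# Region, database, table, and output bucket are hypothetical.
import boto3

athena = boto3.client("athena", region_name="ap-south-1")

response = athena.start_query_execution(
    QueryString="SELECT status, COUNT(*) AS n FROM diagnostics_curated GROUP BY status",
    QueryExecutionContext={"Database": "analytics_db"},
    ResultConfiguration={"OutputLocation": "s3://example-athena-results/"},
)
print(response["QueryExecutionId"])  # poll get_query_execution for completion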
EDUCATION

Bachelor's or Master's in Computer Science/IT/Computer Applications/Software Engineering.

Key skills:
ETL, Informatica, data modeling, Erwin, Unix, Power BI, time series, data science, data quality, data analysis, data security, data governance, enterprise data, machine learning
