hireejobs

Lead Analyst

Experience: 10 to 14 Years | Chennai | 29 Feb, 2020
Job Location: Chennai
Education: Not Mentioned
Salary: Not Disclosed
Industry: Banking / Financial Services
Functional Area: Statistics / Analytics
Employment Type: Full-time

Job Description

Process Overview: The Global Banking and Markets division serves mid- to large-sized corporations and institutional clients worldwide. It comprises Business Banking, Global Commercial Banking, Global Corporate Investment Banking, Global Markets and Wholesale Credit. Aligned with these client-facing groups are Global Capital Markets and Global Research. Shared Technology Platforms is a portfolio under the GBAMT Strategy, Architecture and Core Platforms portfolio. It is responsible for designing, building and maintaining high-performing software systems used by Global Banking and Markets Technology employees globally. These are technology-for-technology tools that cater to varying project management needs, including but not limited to forecasting, hiring and resource lifecycle management.

Job Description: We are looking for a Big Data Engineer to work on collecting, storing, processing and analyzing very large data sets. The primary focus will be on choosing optimal solutions for these purposes, then implementing, maintaining and monitoring them, and on integrating them with the architecture used across the business functions. In addition, the candidate should have experience training ML and data science models and deploying them into production, and should be able to programmatically integrate those models with various pipelines.

Responsibilities

  • Help to guide platform architecture, ensuring flexibility and scalability
  • Be an internal reference point for engineering best practices
  • Integrate any Big Data tools and frameworks required to provide requested capabilities
  • Implement ETL processes using Pig, Spark, Python or other relevant open-source tools
  • Build data pipelines for business teams
  • Monitoring performance and advising any necessary infrastructure changes as required
  • Write clean code that is ready to be deployed, scaled, and maintained
  • Be an expert in Scala and Java, comfortable with functional programming principles, and ready to sift through and evaluate new technologies, using personal experience to weigh technology trade-offs that impact the business every day
  • Build ML models for parsing structured and unstructured documents
  • Leverage deep learning capabilities wherever appropriate
  • Integrate Python models with several Python pipelines, including production deployments
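As an illustration of the ETL responsibilities above, here is a minimal extract-transform-load sketch. It uses plain Python generators rather than Spark for portability, and all file contents and field names are hypothetical:

```python
import csv
import io

def extract(csv_text):
    """Extract: read raw CSV rows as dictionaries."""
    return csv.DictReader(io.StringIO(csv_text))

def transform(rows):
    """Transform: normalize fields and drop incomplete records."""
    for row in rows:
        if not row.get("amount"):
            continue  # skip records missing the amount field
        yield {
            "client": row["client"].strip().upper(),
            "amount": float(row["amount"]),
        }

def load(records):
    """Load: here we simply collect into a list; a real pipeline
    would write to Hive, HDFS, or a warehouse table."""
    return list(records)

# Hypothetical input data
raw = "client,amount\n acme ,100.5\nglobex,\ninitech,42\n"
result = load(transform(extract(raw)))
# result == [{'client': 'ACME', 'amount': 100.5},
#            {'client': 'INITECH', 'amount': 42.0}]
```

In a Spark job the same three stages would typically map onto a read, a chain of DataFrame transformations, and a write, with the generator laziness replaced by Spark's own lazy evaluation.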
Requirements
  • Education: B.E. / B.Tech / MBA / PGDBM
  • Certifications (if any): CFA, FRM
  • Experience Range: 10 - 14 years
  • Mandatory Skill:
    • Minimum 4+ years of experience in Hadoop distribution platform development and implementation into production
    • Strong hands-on experience in software engineering, Hive, HBase, Impala, Scala, Spark, PySpark, Hadoop, AWS, Linux/Unix, Bash, and data processing/transformation
    • Proficient understanding of distributed computing principles
    • Management of Hadoop cluster, with all included services preferably Cloudera distribution
    • Ability to solve any ongoing issues with operating the cluster
    • Proficiency with Hadoop v2, MapReduce, HDFS
    • Experience with building stream-processing systems, using solutions such as Storm or Spark-Streaming
    • Experience with integration of data from multiple data sources
    • Proactive, autonomous, fast learner, team worker
    • Knowledge of various ETL techniques and frameworks, such as Flume, NiFi
    • Experience with various messaging systems, such as Kafka
    • Experience with Big Data ML toolkits, such as TensorFlow, SparkML, or H2O
    • Good understanding of Lambda Architecture, along with its advantages and drawbacks
    • Industry experience in predictive modeling, data science and quantitative analysis
    • Exposure to neural networks and basic experience constructing LSTMs or other recurrent neural networks
    • Very good exposure to Python packages such as spaCy, NumPy, pandas, re, and Flask/Django
    • Very good communication skills
  • Desired Skills:
    • Experience with ETL pipeline tools like Airflow, and with code version control systems like Git
    • Automation in data pipeline
    • Experience in Configuration management and deployment tools
    • Should have experience in working on Agile based projects
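The document-parsing requirement above usually starts with a preprocessing step that turns semi-structured text into records before any ML model sees it. A minimal sketch using only the stdlib `re` module; the document format, field names, and pattern are illustrative, not from any real pipeline:

```python
import re

# Hypothetical semi-structured document text
doc = """
Trade ID: T-1029
Counterparty: Globex Corp
Notional: 5,000,000 USD
"""

FIELD_RE = re.compile(r"^(?P<key>[\w ]+):\s*(?P<value>.+)$", re.MULTILINE)

def parse_fields(text):
    """Turn 'Key: Value' lines into a dict; numeric-looking values
    (digits with optional thousands separators) become floats."""
    out = {}
    for m in FIELD_RE.finditer(text):
        key = m.group("key").strip().lower().replace(" ", "_")
        value = m.group("value").strip()
        num = value.replace(",", "").split()[0]
        out[key] = float(num) if num.replace(".", "", 1).isdigit() else value
    return out

parsed = parse_fields(doc)
# parsed["trade_id"] == "T-1029"; parsed["notional"] == 5000000.0
```

For genuinely unstructured text, a library such as spaCy would replace the regex with tokenization and named-entity recognition, but the output shape (field name to typed value) stays the same.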

Keyskills :
sql, commercial models, data processing, mentoring, capital markets, investment banking, commercial banking, reporting


© 2019 Hireejobs All Rights Reserved