Job Location | Hyderabad |
Education | Not Mentioned |
Salary | Not Disclosed |
Industry | IT - Software |
Functional Area | General / Other Software |
Employment Type | Full-time |
Job Summary

Interested in machine learning, and in empowering the world to do more and better machine learning? With Amazon SageMaker, the Amazon Web Services (AWS) Machine Learning platform team is building customer-facing services to catalyze data scientists and software engineers in their machine learning endeavors. The product is a blend of HTTP APIs, low- and high-level SDKs, and an AWS Console UI.

You will design, implement, test, document, and support cross-cutting services that help customers do machine learning at scale. You'll assist in gathering and analyzing business and functional requirements, and translate them into technical specifications for robust, scalable, supportable solutions that work well within the overall system architecture. You will serve as a key technical resource across the full development cycle, from conception through delivery and maintenance. You will produce comprehensive, usable software documentation and recommend changes to development, maintenance, and system standards. You will own delivery of an entire piece of the system, serve as technical lead on complex projects using best-practice engineering standards, and hire and mentor junior development engineers.

The candidate should be a talented engineer who shows initiative, adaptability to a challenging environment, problem-solving skills, and an understanding of data engineering. 3+ years of industry experience is a must.

Responsibilities
- Create capabilities and abstractions that enable anyone (engineer or data scientist) to build a scalable ETL pipeline for any purpose: metrics, analysis, machine learning, dashboard visualizations.
- Make intuitive decisions about which services, frameworks, and capabilities need to be in place before they are desperately needed.
- Build and maintain a data collection system that robustly extracts relevant data from multiple sources and data stores.

Qualifications
- BA/BS in Computer Science, Information Systems, or a related technical field.
- 3+ years of experience in data engineering; cloud software experience a plus.
- Proficiency in at least one modern programming language such as Scala, Java, Python, or Perl.
- Strong analytical and problem-solving skills.
- Experience designing and scaling data engineering systems, models, and pipelines.
- Hands-on experience with a variety of data infrastructures, such as:
  - Processing: Spark, Flink, Hadoop, Lambda
  - Messaging: Kafka, Kinesis
  - Storage: Hive, RDS, Athena, DynamoDB
  - Machine Learning: SageMaker, H2O, Keras
- Open and active in sharing knowledge, with excellent communication skills.
- Programming experience in one or more application or systems languages: Scala, Java, Python.
- Knowledge of a deep learning framework such as TensorFlow, MXNet, or PyTorch is a plus.

Job Profile Key Skills: Proficiency in at least one modern programming language such as Scala, Java, Python, or Perl.
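The "scalable ETL pipeline" responsibility above can be sketched in its simplest form as three composable stages. This is a minimal illustration only, not Amazon's implementation: the in-memory source and sink stand in for the real stores the posting names (Kafka, Hive, DynamoDB, and so on), and every function name here is hypothetical.

```python
# Minimal sketch of the extract-transform-load pattern described
# in the responsibilities. Source and sink are plain Python
# structures; in practice they would be external data stores.

def extract(source):
    """Pull raw records from a source (here, an in-memory list)."""
    return list(source)

def transform(records):
    """Normalize records: drop invalid rows, uppercase the skill name."""
    return [
        {"skill": r["skill"].strip().upper(), "years": r["years"]}
        for r in records
        if r.get("skill") and r.get("years", 0) >= 0
    ]

def load(records, sink):
    """Write transformed records to a sink (here, a list)."""
    sink.extend(records)
    return len(records)

raw = [
    {"skill": " python ", "years": 3},
    {"skill": "scala", "years": 5},
    {"skill": "", "years": 2},   # dropped by transform: empty skill
]
warehouse = []
loaded = load(transform(extract(raw)), warehouse)
print(loaded, warehouse[0]["skill"])  # → 2 PYTHON
```

In a production pipeline each stage would typically be a separate, independently scalable component (for example, a Spark job consuming from Kafka and writing to Hive), but the extract → transform → load shape stays the same.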
Keyskills: java, sql, javascript, jquery, machine learning, IT support, deep learning, SQL Server, software documentation, data collection, software engineers, computer science, problem solving, information systems, data engineering