Job Location | Rajkot |
Education | Not Mentioned |
Salary | Not Disclosed |
Industry | Oil & Gas / Petroleum |
Functional Area | DBA / Datawarehousing |
Employment Type | Full-time |
Responsibilities:
- Write effective, scalable code
- Develop back-end components to improve responsiveness and overall performance
- Integrate user-facing elements into applications
- Implement the application's CI/CD pipeline using the AWS CI/CD stack
- Test and debug programs
- Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources using SQL and AWS big data technologies
- Build analytics tools that use the data pipeline to provide actionable insights into customer acquisition, operational efficiency, and other key business performance metrics
- Work with stakeholders, including the Executive, Product, Data, and Design teams, to assist with data-related technical issues and support their data infrastructure needs
- Keep our data separated and secure across national boundaries through multiple data centers and AWS regions
- Create Scala/Spark jobs for data transformation and aggregation
- Produce unit tests for Spark transformations and helper methods
- Write Scaladoc-style documentation with all code
- Design data processing pipelines
- Set up a monitoring stack
- Improve functionality of existing systems
- Implement security and data protection solutions
- Assess and prioritize feature requests
- Coordinate with internal teams to understand user requirements and provide technical solutions

Requirements:
- Work experience as a Senior Data Engineer
- Experience with the core AWS services, plus the specifics mentioned in this job description
- Expertise in at least one popular Python framework (such as Django, Flask, or Pyramid)
- Past experience with serverless approaches using AWS Lambda is a plus; for example, the Serverless Application Model
- Knowledge of object-relational mapping (ORM)
- Familiarity with front-end technologies (such as JavaScript and HTML5)
- Team spirit, leadership, and fluent English communication
- Good problem-solving skills

Required Education | Any Graduate With Relevant Experience |
No Of Position |  |
Working Hours | 10:00 AM to 07:30 PM |
Lunch Break | 1:30 PM to 2:30 PM |
Job Opening Status |  |

Key Skills for Sr Data Engineer Jobs: Data, Python, Apache Spark, Big Data, Flask, Django, AWS
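The responsibilities above include creating data-transformation and aggregation jobs with unit tests (in Scala/Spark on AWS, per the listing). Purely as an illustrative sketch, here is the same transform-then-aggregate shape in plain Python with a hypothetical event schema (`region`, `amount_cents`) — not the Spark stack itself:

```python
# Illustrative sketch only: the role uses Scala/Spark on AWS. This plain-Python
# version shows the transform-then-aggregate pattern with hypothetical fields.
from collections import defaultdict

def transform(record):
    """Normalize one raw event (hypothetical schema: 'region', 'amount_cents')."""
    return {
        "region": record["region"].strip().upper(),
        "amount": record["amount_cents"] / 100.0,  # cents -> currency units
    }

def aggregate(records):
    """Sum transformed amounts per region, akin to a groupBy().sum() in Spark."""
    totals = defaultdict(float)
    for rec in map(transform, records):
        totals[rec["region"]] += rec["amount"]
    return dict(totals)

# A small unit test, in the spirit of "produce unit tests for transformations":
raw = [
    {"region": " us-east ", "amount_cents": 1250},
    {"region": "US-EAST", "amount_cents": 750},
    {"region": "eu-west", "amount_cents": 500},
]
assert aggregate(raw) == {"US-EAST": 20.0, "EU-WEST": 5.0}
```

A Scala/Spark equivalent would express the transform as a `map` over a Dataset and the aggregation as a `groupBy` with `sum`; the unit-test pattern carries over directly to Spark transformations and helper methods.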
Keyskills: sql, java, data warehousing, informatica, python, continuous improvement facilitation, web app, big data