
Staff Data Engineer

7.00 to 10.00 Years   San Francisco (California)   21 Nov, 2024
Job Location: San Francisco (California)
Education: CA (Chartered Accountant) / Any Graduate
Salary: As per Industry Standards
Industry: Advertising/PR/Event Management, Entertainment/Media
Functional Area: IT Software: Software Products & Services
Employment Type: Full-time

Job Description

See yourself at Twilio
Join the team as our next Staff Data Engineer on Twilio's Segment product team.

Who we are & why we're hiring
Twilio powers real-time business communications and data solutions that help companies and developers worldwide build better applications and customer experiences. Although we're headquartered in San Francisco, we have a presence throughout South America, Europe, Asia and Australia. We're on a journey to becoming a global company that actively opposes racism and all forms of oppression and bias. At Twilio, we support diversity, equity & inclusion wherever we do business.

About the job
The Data Engineering team at Twilio Segment is the backbone of all data-driven decisions we make to move the business forward. We are seeking a highly skilled data engineer to join our team and help drive our development process. As a staff software engineer, you will partner with business stakeholders across the organization to identify pain points, gather requirements, and extract value from our data. You will be responsible for designing, building, and maintaining pipelines that process terabyte-scale datasets using both batch and streaming processing techniques. You will also help optimize the design of our data warehouse and help teams build data-driven processes and automation on top of it.

Responsibilities
In this role, you'll:

  • Design, build, and maintain data pipelines that collect, process, and transform large volumes of data from various sources into a format suitable for analysis.
  • Develop and maintain our data warehouse (Snowflake) to enable efficient and accurate analysis of data.
  • Document data pipelines, data models, and data transformation processes.
  • Collaborate with cross-functional teams to identify and understand data requirements for various business needs.
  • Work with data scientists to build our internal machine learning infrastructure.
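The first responsibility above, transforming raw data from various sources into a format suitable for analysis, can be sketched as a small batch transform. This is an illustrative example only, not Twilio's actual pipeline code; the record fields and cleaning rules are assumptions:

```python
from datetime import datetime, timezone

def transform_events(raw_events):
    """Normalize raw event records into rows suitable for a warehouse fact table.

    Drops records missing a user_id, converts ISO timestamps to UTC,
    and lower-cases event names so downstream GROUP BYs are consistent.
    """
    rows = []
    for ev in raw_events:
        if not ev.get("user_id"):
            continue  # a real pipeline would route these to a quarantine table
        ts = datetime.fromisoformat(ev["ts"]).astimezone(timezone.utc)
        rows.append({
            "user_id": ev["user_id"],
            "event_name": ev["event"].strip().lower(),
            "event_ts": ts.isoformat(),
        })
    return rows

raw = [
    {"user_id": "u1", "event": " Page Viewed ", "ts": "2024-11-21T10:00:00+05:30"},
    {"user_id": None, "event": "Click", "ts": "2024-11-21T10:01:00+00:00"},
]
clean = transform_events(raw)
```

In practice a transform like this would be one step in a larger pipeline, with the output bulk-loaded into the warehouse rather than kept in memory.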
Qualifications
Not all applicants will have skills that match a job description exactly. Twilio values diverse experiences in other industries, and we encourage everyone who meets the required qualifications to apply. While having the desired qualifications makes for a strong candidate, we encourage applicants with alternative experiences to also apply. If your career is just starting or hasn't followed a traditional path, don't let that stop you from considering Twilio. We are always looking for people who will bring something new to the table!

Required:
  • 7 years of experience in data engineering or related fields, with a strong focus on designing and building scalable data systems.
  • Experience in designing scalable data warehouses and working with modern data warehousing solutions, such as Snowflake.
  • Experience with data orchestration tools like Airflow and dbt, with a solid understanding of data modeling and ETL principles.
  • Experience with infrastructure-as-code tools (e.g., Terraform) and modern CI/CD pipelines.
  • Proven track record of delivering large-scale data projects and working in cross-functional teams.
  • Self-starter with the ability to work independently and autonomously, as well as part of a team.
Desired:
  • Experience building large-scale distributed systems in AWS.
  • Experience with Python, Go, and/or Java.
  • Experience with streaming technology stacks such as Kafka or Kinesis.
  • Experience with managing and deploying machine learning models.
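Streaming stacks such as Kafka or Kinesis typically feed jobs that aggregate events over time windows. A minimal sketch of a tumbling-window count over an in-memory stream standing in for a real consumer (the timestamps and keys are made up for illustration):

```python
from collections import defaultdict

def windowed_counts(stream, window_secs=60):
    """Count events per key in tumbling windows, as a streaming job might.

    stream: iterable of (epoch_seconds, key) tuples, assumed time-ordered.
    Returns {(window_start, key): count}.
    """
    counts = defaultdict(int)
    for ts, key in stream:
        window_start = ts - (ts % window_secs)  # floor to window boundary
        counts[(window_start, key)] += 1
    return dict(counts)

events = [(0, "a"), (10, "a"), (61, "a"), (61, "b")]
result = windowed_counts(events)
```

A production streaming job would also handle late-arriving data and checkpoint its offsets, which this sketch omits.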
Location
This role will be remote but is not eligible to be hired in San Francisco, CA; Oakland, CA; San Jose, CA; or the surrounding areas.

Travel
We prioritize connection and opportunities to build relationships with our customers and each other. For this role, approximately

Keyskills :
data pipeline development, data warehousing (Snowflake), ETL principles & data modeling, infrastructure as code (Terraform), machine learning integration, orchestration, Java, Python

© 2019 Hireejobs All Rights Reserved