Job Location | Bangalore
Education | Not Mentioned
Salary | Not Disclosed
Industry | Management Consulting / Strategy
Functional Area | Sales / BD
Employment Type | Full-time
Location: Bangalore, India

Sapiens International Corporation (NASDAQ and TASE: SPNS) is a leading global provider of software solutions for the insurance industry, with an emerging focus on the broader financial services sector. We offer core, end-to-end solutions to the global general insurance, property and casualty, life, pension and annuities, and retirement markets, as well as business decision management software. We have a track record of over 30 years in delivering superior software solutions to more than 400 financial services organizations. The Sapiens team of approximately 2,500 professionals operates through our fully-owned subsidiaries.

Click the following link to learn more about Sapiens India: https://www.youtube.com/watch?v=VK66N3y8-ck

Big Data Engineer / Architect

Responsibilities:
- Architect highly scalable distributed systems / big data solutions using different open-source tools.
- Design, develop, load, maintain and test large-scale distributed systems.
- Analyse and visualize large sets of data to turn information into insights using multiple platforms.
- Translate complex functional and technical requirements into detailed design.
- Install, configure and support Big Data tools.
- Maintain security and data privacy.
- Propose best practices / standards.
- Be part of a POC effort to help build new big data clusters.

Skills required:
- Minimum 5 years' experience working with major big data solutions such as Hadoop, MapReduce, Hive, Impala, HBase, MongoDB and Cassandra.
- Expertise in back-end programming, specifically Java, JS, Node.js, Linux, PHP, Ruby, Python and/or R.
- Hands-on experience in Kafka & Spark (mandatory).
- Experience working with other big data solutions such as Oozie, Mahout, Flume, ZooKeeper and/or Sqoop.
- Experience with object-oriented analysis & design (OOAD), coding and testing patterns.
- Good understanding of cluster and parallel architecture as well as high-scale or distributed RDBMS, and/or knowledge of NoSQL platforms.
- Ability to guide and train junior developers on Big Data platform tools and technologies.
- Expertise in data warehousing solutions; proficiency in designing efficient and robust ETL workflows.
- Ability to write high-performance, reliable and maintainable code, including MapReduce jobs and Pig Latin scripts.
- Analytical and problem-solving skills applied to the Big Data domain.
- Bachelor's or Master's degree in computer science or software engineering.
- Excellent written and verbal communication skills.
Keyskills:
build, business, property, financial services, insurance, general insurance, life insurance, sales, train, pension, insurance domain