hireejobsgulf

Senior Azure Big Data Engineer

1 to 10 Years | Hyderabad, Pakistan | 04 Jan, 2024
Job Location: Hyderabad, Pakistan
Education: Not Mentioned
Salary: Not Mentioned
Industry: Other Business Support Services
Functional Area: Not Mentioned

Job Description

Position Summary

Role Value Proposition: Work and collaborate with a nimble, autonomous, cross-functional team of makers, breakers, doers, and disruptors who love to solve real problems and meet real customer needs. You will use cutting-edge technologies and frameworks to analyze data, help build the data pipeline, and collaborate with the data science team to enable innovative work in machine learning and AI. Eagerness to learn new technologies on the fly and ship to production is essential. Knowledge of data science is a plus. More than just a job: we hire people who love what they do!

Job Responsibilities:
1. Building and implementing data ingestion and curation processes using big data tools such as Spark (Scala/Python), Databricks, Delta Lake, Hive, Pig, HDFS, Oozie, Sqoop, Flume, ZooKeeper, Kerberos, Sentry, Impala, etc.
2. Ingesting huge volumes of data from various platforms for analytics needs and writing high-performance, reliable, and maintainable ETL code.
3. Monitoring performance and advising on any necessary infrastructure changes.
4. Defining data security principles and policies using Ranger and Kerberos.
5. Assisting application developers and advising on efficient big data application development using cutting-edge technologies.
6. Leading, coaching, supporting, and mentoring team members: setting performance goals, reviewing their work as required, and providing adequate guidance and feedback to help them achieve their goals and do right for the enterprise.

Knowledge, Skills and Abilities

Experience Required:

  • Bachelor's degree in Computer Science, Engineering, or a related discipline.
  • 8+ years of solutions development experience.
  • Proficiency and extensive experience with Spark (Scala, Python) and performance tuning is a MUST.
  • Hive database management and performance tuning is a MUST (partitioning/bucketing).
  • Strong SQL knowledge and data analysis skills for data anomaly detection and data quality assurance.
  • Strong analytical skills related to working with unstructured datasets.
  • Experience building stream-processing systems using solutions such as Storm or Spark Streaming.
  • Experience with model management methodologies.
Required:
  • Proficiency and extensive experience in HDFS, Hive, Spark, Scala, Python, Databricks/Delta Lake, Flume, Kafka, etc.
  • Analytical skills to assess situations and arrive at optimal, efficient solutions based on requirements.
  • Performance tuning and problem-solving skills are a must.
  • Hive database management and performance tuning is a MUST (partitioning/bucketing).
  • Hands-on development experience and high proficiency in Java or Python, Scala, and SQL.
  • Experience designing multi-tenant, containerized Hadoop architectures for memory/CPU management and sharing across different lines of business (LOBs).
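Since Hive partitioning and bucketing are flagged as a MUST above, here is a minimal illustrative HiveQL sketch of both techniques; the table and column names are hypothetical, not part of this role's actual systems:

```sql
-- Hypothetical claims table: partitioned by ingestion date,
-- bucketed by policy_id to speed up joins and sampling.
CREATE TABLE claims (
  claim_id   BIGINT,
  policy_id  BIGINT,
  amount     DECIMAL(12,2)
)
PARTITIONED BY (ingest_date STRING)
CLUSTERED BY (policy_id) INTO 32 BUCKETS
STORED AS ORC;

-- Partition pruning: filtering on the partition column means
-- only that partition's files are scanned, not the whole table.
SELECT SUM(amount)
FROM claims
WHERE ingest_date = '2024-01-04';
```

Partitioning cuts I/O by physically separating data on a coarse key such as date, while bucketing hash-distributes rows on a high-cardinality key so joins between tables bucketed the same way can avoid a full shuffle.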
Preferred:
  • Knowledge in data science is a plus
  • Experience with Informatica PC/BDM 10, including implementing pushdown processing into the Hadoop platform, is a huge plus.
  • Proficiency in using Git, Bamboo, and other continuous integration and deployment tools.
  • Exposure to data governance principles such as metadata and lineage (Collibra/Atlas).
Knowledge and Skills:
  • Proficiency and extensive experience in data analysis.
  • Solid background in writing SQL and using other exploratory tools.
  • Extensive experience writing functional requirements and high-level design documents.
  • Experience writing validation scripts to validate data, data integrations, and ETL transformations.
  • Analytical skills to assess situations and arrive at optimal, efficient solutions based on requirements.
  • Strong background or experience working in the insurance industry.
  • SQL tuning and data warehousing (DW) concepts.
  • Strong analytical skills related to working with unstructured datasets.

MetLife:
MetLife, through its subsidiaries and affiliates, is one of the world's leading financial services companies, providing insurance, annuities, employee benefits, and asset management to help its individual and institutional customers navigate their changing world. Founded in 1868, MetLife has operations in more than 40 countries and holds leading market positions in the United States, Japan, Latin America, Asia, Europe, and the Middle East. We are ranked #44 on the Fortune 500 list for 2019. In 2019, we were named to the Dow Jones Sustainability Index (DJSI) for the fourth year in a row. DJSI is a global index that tracks the leading sustainability-driven companies. MetLife is committed to building a purpose-driven and inclusive culture that energizes our people. Our employees work every day to help build a more confident future for people around the world. MetLife is a proud Equal Employment Opportunity and Affirmative Action employer dedicated to attracting, retaining, and developing a diverse and inclusive workforce.
All qualified applicants will receive consideration for employment at MetLife without regard to race, color, religion, sex (including pregnancy, childbirth, or related medical conditions), sexual orientation, gender identity or expression, age, disability, national origin, marital or domestic/civil partnership status, genetic information, citizenship status, uniformed service member or veteran status, or any other characteristic protected by law.


© 2023 HireeJobsGulf All Rights Reserved