About the Role
Location: Hyderabad
The Snr. Specialist DDIT APD HPC DevOps will be a core member of an F1 Foundry team, supporting maintenance of the data42 HPC platform and ensuring delivery per the roadmap and vision laid out.
Your responsibilities include but are not limited to:
• Responsible for the design and development of features required in the Analytics platform (HPC)
• Collaborate with teams to understand their scientific analysis activities and help them resolve any issues they encounter in that work.
• Develop and deploy new prototypes based on the agreed strategy and roadmap; deliver the roadmap activities of the Analytics platform (HPC).
• Understand, analyse and address user-encountered issues in the Analytics platform; understand the HPC ecosystem design end to end and take responsibility for developing new features and changes.
• Document newly developed features, change requests and fixes; adopt DevOps practices and tools, with a good understanding of DevOps implementations.
• Evaluate, validate and optimize the Analytics platform to reduce costs; design and develop new solutions for the Analytics platform using the AWS cloud and tool stack.
• Design and implement integrations with Novartis internal and external systems and the F1 AWS platform; provide Level 4 internal technical support for the overall Analytics platform, and support and advise product and services teams when required.
• Coordinate with the Quality Engineer to ensure all quality controls, naming conventions and best practices have been followed.
Diversity & Inclusion / EEO
We are committed to building an outstanding, inclusive work environment and diverse teams representative of the patients and communities we serve.
Role Requirements
• 6+ years of IT experience; 4+ years' experience with Analytics platforms (HPC); 2+ years' experience with Big Data platforms; 2+ years' experience with DevOps tools
• Hands-on experience with the AWS cloud stack, in particular AWS services (EC2, S3, EKS, FSx for Lustre, AMIs) and Terraform; experience with Slurm architecture and configuration
• Good understanding of AWS ParallelCluster; proficiency in at least one programming language such as Python or R; solid systems engineering experience
• Implementation experience with DevOps tools and practices; knowledge of software engineering principles and the ability to write clean, maintainable code; strong problem-solving and analytical skills with keen attention to detail
• Good communication and teamwork skills, with the ability to work effectively in a collaborative environment; hands-on with programming languages, primarily Python, Unix shell scripting, PySpark and Spark
• Strong grasp of API-based architecture and concepts; software engineering experience (version control, Scrum, testing, collaboration best practices)
Division
Operations
Business Unit
DATA, DIGITAL & IT
Location
India
Site
Hyderabad, AP
Company / Legal Entity
Nov Hltcr Shared Services Ind
Functional Area
Technology Transformation
Job Type
Full Time
Employment Type
Regular
Shift Work
No