Research Scientist, AI Safety and Alignment
San Francisco
Mar 21, 2025
ABOUT GOOGLE
Our mission is to organize the world’s information and make it universally accessible and useful.
10,000+ employees
Technology
About Research Scientist, AI Safety and Alignment

  Office locations: Also open to Mountain View and London.

  At Google DeepMind, we value diversity of experience, knowledge, backgrounds and perspectives and harness these qualities to create extraordinary impact. We are committed to equal employment opportunity regardless of sex, race, religion or belief, ethnic or national origin, disability, age, citizenship, marital, domestic or civil partnership status, sexual orientation, gender identity, pregnancy or related condition (including breastfeeding) or any other basis as protected by applicable law. If you have a disability or additional need that requires accommodation, please do not hesitate to let us know.

  Snapshot

  Our team is responsible for enabling AI systems to reliably work as intended, including identifying potential risks from current and future AI systems, and conducting technical research to mitigate them. As a Research Scientist, you will design, implement, and empirically validate approaches to alignment and risk mitigation, and integrate successful approaches into our best AI systems.

  About Us

  Conducting research into any transformative technology comes with the responsibility to build mechanisms for safe and reliable development and deployment at every step. Technical safety research at Google DeepMind investigates questions related to evaluations, reward learning, fairness, interpretability, robustness, and generalisation in machine learning systems. Proactive research in these areas is essential to the fulfilment of the long-term goal of Google DeepMind: to build safe and socially beneficial AI systems.

  Research Scientists work at the forefront of technical approaches to designing systems that reliably function as intended while discovering and mitigating risks, in close collaboration with other AI research groups within and outside of Google DeepMind.

  The Role

  Key responsibilities:

  Identify and investigate possible failure modes for foundation models, ranging from sociotechnical harms (e.g. fairness, misinformation) to misuse (e.g. weapons development, criminal activity) to loss of control (e.g. high-stakes failures, rogue AI).

  Develop and implement technical approaches to mitigate these risks, such as benchmarking and evaluations, dataset design, scalable oversight, interpretability, adversarial robustness, monitoring, and more, in coordination with the team’s broader technical agenda.

  Report and present research findings and developments to internal and external collaborators with effective written and verbal communication.

  Collaborate with other internal teams to ensure that Google DeepMind AI systems and products (e.g. Gemini) are informed by and adhere to the most advanced safety research and protocols.

  About You

  You have extensive research experience with deep learning and/or foundation models (for example, a PhD in machine learning).

  You are adept at generating ideas, designing experiments, and implementing them in Python with real AI systems.

  You are keen to address risks from foundation models, and have thought about how to do so. You plan for your research to impact production systems on a timescale between “immediately” and “a few years”.

  You are excited to work with strong contributors to make progress towards a shared ambitious goal. With strong, clear communication skills, you are confident engaging technical stakeholders to share research insights tailored to their background.
