OpenAI has announced the formation of a Safety and Security Committee. See OpenAI Board Forms Safety and Security Committee. This comes after the departure of Ilya Sutskever and Jan Leike, the co-leads of the Superalignment team.
What is notable is that this Committee is an offshoot of the board and has four board members on it, including Sam Altman. It sounds to me like the board will keep it on a short leash; I doubt it will have the independence to stand up to the board.
I also note that OpenAI's ethics language is no longer about "superalignment" but about safety and security. With Sutskever and Leike gone, they are no longer trying to build an AI to align superintelligences. OpenAI claims that their mission is "to ensure that artificial general intelligence benefits all of humanity," but how will they ensure such beneficence? They no longer have anything like an open ethics strategy. It's now just safety and security.
I should add that OpenAI also shared a safety update as part of the AI Seoul Summit.