Link: U.K. agency releases tools to test AI model safety

The U.K. AI Safety Institute, the country’s recently established AI safety body, has released a toolset designed to “strengthen AI safety” by making it easier for industry, research organizations and academia to develop AI evaluations. Called Inspect, the toolset — released under an open source MIT license — aims to assess certain capabilities of AI models, including core knowledge and the ability to reason, and to generate a score based on the results.
Yoooo, this is a quick note on a link that made me go, WTF? Find all past links here.