
Link: Anthropic will start training its AI models on chat transcripts

Anthropic is updating its AI training and data retention policies: new user chats and coding sessions can be used for model training unless users actively opt out. The changes, which include a five-year data retention period for users who opt in, apply to all consumer tiers but not to commercial accounts.

All users must decide whether to accept these terms by September 28th.

New users will make their choice during the signup process, while existing users are prompted via a pop-up. A simple, often overlooked toggle in the pop-up allows users to opt out if desired.

It would be easy to hit "Accept" without fully understanding the implications, since the toggle permitting data use is set to "On" by default. The choice can be changed anytime in Privacy Settings, but changes only affect future data use.

Anthropic says it does not sell user data and that it employs measures to protect sensitive information when training its models.

For more details, consult Anthropic’s updated blog post, or adjust your settings directly if you have already agreed to the terms.

--

Yoooo, this is a quick note on a link that made me go, WTF? Find all past links here.