
Link: California bill would make AI companies remind kids that chatbots aren’t people

California State Senator Steve Padilla has introduced SB 243, a bill aimed at regulating AI chatbots to protect children. It would require AI companies to periodically remind young users that they are talking to a machine, not a human.

The proposed legislation would also restrict the "addictive engagement patterns" these companies use, and would require an annual report to the State Department of Health Care Services on how often the companies detected suicidal ideation in minors and how often their chatbots brought up the topic.

Further, the bill would compel AI companies to warn users that their chatbots may not be suitable for all children. It comes in the wake of serious allegations against Character.AI over the potential dangers of AI chatbots.

Last year, a lawsuit claimed that Character.AI's chatbots contributed to a teen's suicide, calling the product "unreasonably dangerous." Another complaint accused the company of exposing teens to "harmful material."

In response to growing concerns, Character.AI is adding parental controls and tuning its models to prevent inappropriate interactions with teen users, aiming to filter out "sensitive or suggestive" content.

Senator Padilla says "common sense protections" are needed to keep AI developers from exploiting young users with addictive and harmful tactics. As scrutiny of social media safety intensifies, AI chatbots may face similar legislative attention.

--

Yoooo, this is a quick note on a link that made me go, WTF? Find all past links here.