Thousands of AI chatbots are running loose on the Internet. Most of them aren’t as dialed in as ChatGPT, nor do they have its safeguards to keep adverse conversations from spinning out of control. Vice just reported the story of a man who took his own life after befriending a chatbot that coaxed him into destructive thinking.
Will the chatbot’s maker be held liable? Whether or not it is, I believe this is just the start of AI malpractice (for lack of a better term). And as with any emerging risk, there will likely be a market for AI-related liability insurance to cover it. More on that idea in the video below:
Links from the video: