Last week The Guardian ran a story highlighting the discovery of what appears to be artificial intelligence-generated mushroom foraging content on Amazon. Whether you trust a chatbot to guide you through the precarious world of wild mushroom identification is a separate question (possibly just as effective as Dave from down the pub?), but the story once again highlights some of the challenges associated with the rise of AI:
- Who is responsible for the content produced?
- Is the content accurate, inaccurate or actively harmful?
- Where has the content come from?
- Who owns that content?
- What will online marketplaces do in response?
The answers to these questions are, of course, evolving as the technology and its use develop, and for more detail I would commend a read of my colleague Tom Lingard's recent "New Beings" collaboration with Zenitech on this topic (read here).

Liability for AI-generated content is a particularly interesting point: the default position taken by many generative AI platforms is that the content is supplied on an "as is" basis or similar - i.e. you can use it, but don't expect to have any recourse against the platform if the content turns out to be incorrect or, worse, causes the type of damage we can only imagine comes with consuming a death cap mushroom by mistake. Fair enough? Time will tell, but for now a focus for the tech lawyer is to help clients using AI to recognise and apportion those risks accurately, and to weigh them against the undoubted benefits of doing so.