
The Rise Of Artificial Intelligence And Changing Intellectual Property Standards

March 25, 2015

Two days ago, in an interview with the Australian Financial Review, Apple co-founder Steve Wozniak joined the ever-growing list of science and technology leaders who are concerned about the development of artificial intelligence (AI). In particular, there is concern about what role humans will play in the future once AI is able to surpass us. In the interview, Wozniak said, “if we build these devices to take care of everything for us, eventually they’ll think faster than us and they’ll get rid of the slow humans to run companies more efficiently.” In making this comment about the rise of AI, Wozniak is in good company: he echoed the sentiments of Dr. Stephen Hawking, Bill Gates, and Elon Musk, co-founder of PayPal and Tesla. Wozniak was clear that even though such an outcome seems likely, it should not dissuade innovation in the field of artificial intelligence.

Comments such as these raise the question: what will the impact of AI be on intellectual property law? Eran Kahana, in his article Intellectual Property Infringement by AI Applications, delves into this in greater depth.

Kahana outlines the various levels of AI apps and highlights that only the more advanced ones, such as Level D apps, which “manifest[] intelligence levels so sophisticated that it can identify and reprogram any portion of its behaviour”, could be problematic when it comes to enforcing IP rights. Kahana also notes that the current formulation of IP law, when it comes to infringement, presumes human involvement. The example Kahana provides is of a web spider (an automated crawler) that has misused protected content, leading to the developer or designer being subject to suit. He argues that this strict liability standard is problematic, given that the developer or designer could not have reasonably foreseen culpability where a Level D app has reprogrammed its own behaviours. An iterative liability standard is proposed instead, whereby the developer is responsible only where it cannot be shown that the AI acted independently. Kahana does note that even this standard can be problematic once the AI learns to avoid detection.

To read more of Kahana’s article, please visit: http://web.stanford.edu/dept/law/ipsc/PDF/Kahana,%20Eran%20-%20Abstract.pdf