Employees Should Be Required To Disclose When They Have Used AI To Create A Work Product, And Identify What Part Of The Work Product Was Generated By AI
There can be little doubt that the technology buzzword du jour is AI, or artificial intelligence. Pardon us for appearing cynical, but we have seen this road show too many times before. Does anyone recall when blockchain technology was going to change the world?
It seems to us that the term AI, as it is used today, simply reflects the fact that computers have become powerful enough to run software capable of making decisions in real time that would have been impossible only a few years ago.
One of the earliest definitions of artificial intelligence was formulated by Alan Turing, whom many regard as the father of the modern computer (and who deciphered the German Enigma machine during the Second World War, considerably aiding the Allied war effort). His definition was that if you were having a conversation with someone hidden behind a screen and could not tell whether you were talking to a machine or a human, then that machine was artificially intelligent.
OK, we are oversimplifying this definition, but it gives you the general idea.
We are not there yet. Having said this, there are many programs available today that provide a good semblance of intelligence. As with all new technology trends, there is considerable pressure to introduce these tools into the workplace.
Slack recently surveyed more than 10,000 desk workers around the globe and found that 81 per cent of executives feel some urgency to implement AI in their jobs, and 50 per cent feel a high degree of urgency to do so.
The bigger problem, though, is that many employees are already using AI technology, on their own initiative, to assist in their jobs. The same Slack survey indicates that fully 25 per cent of desk workers have already used, or attempted to use, AI in their workplace.
This can lead to a multitude of problems. First of all, AI still isn't all that "I." Anybody who has used it quickly learns that some pretty strange results can occur. In many cases AI programs just make stuff up, a phenomenon known in the industry as "hallucination."
In legal circles, there is already a legendary story of two New York lawyers, Steven Schwartz and Peter LoDuca, who filed an AI-generated brief that cited completely fictitious cases and named non-existent judges. Their brief certainly sounded good, but it was entirely invented by the program they used.
In Canada, B.C. lawyer Chong Ke used AI to prepare briefs that were filed in a family law case in the B.C. Supreme Court. Once again, the cases cited were completely concocted.
This is problem number one: employees using AI to produce work that looks good on the surface but is in fact complete nonsense. Imagine this percolating throughout your organization, as various employees produce garbage work that is then relied upon by others, and by their AI tools, further compounding the problem.
Problem number two: employees taking credit for work that isn't theirs. Suppose the work product created by AI is actually pretty good. Should the employee get full credit? And how do you evaluate employees who are using AI to complete their work?
If you are choosing between employees for a promotion or a bonus, how do you know who is the better candidate, as opposed to who merely has access to better AI?
To deal with these and other issues, we strongly suggest that employers put a policy in place governing AI usage.
At a minimum, employees should only be allowed to use AI products that have been vetted and approved by the company.
Employees should also be required to disclose when they have used AI to create a work product, and to identify what part of the work product was generated by AI.
Finally, AI-generated work must always be reviewed, ideally by a human.
There are undoubtedly many other issues such a workplace policy would cover, depending on the industry involved and the nature of the work being performed. Using AI to help write comic books raises different issues from using AI to design parts for commercial passenger aircraft.
Failure to implement such a policy risks complete chaos, as individual employees using different AI tools produce work of varying quality, with no way of knowing whether the work was done by a human or a machine.
Let’s start with this rule: do not have your AI policy generated by AI!
The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.
Authors: Howard Levitt, Peter Carey
Levitt Law