Employers must consider expectations and permissions around AI products like ChatGPT
Media Contact: Barbara Fornasiero, EAFocus Communications, 248.260.8466, barbara@eafocus.com
Detroit—May 17, 2023—As if the working world hasn’t experienced enough change throughout the pandemic, employers now have to navigate the use of artificial intelligence. Some organizations may fully embrace the next generation technology while others are cautious. Whatever the stance, Deborah Brouwer, managing partner of Detroit-based management-side labor and employment law firm Nemeth Bonnette Brouwer, encourages employers to get ahead of ChatGPT and other artificial intelligence options while they still can.
“AI products like ChatGPT can bring opportunity to the workplace, but they also present new complications for employers managing their use,” Brouwer notes. “It would be prudent for employers to create policy guidelines for AI usage as it pertains to the company overall, while also drilling down to individual departments and jobs.”
When looking in particular at ChatGPT and its many capabilities, it should be understood that the technology carries the risk of providing incorrect information and harmful and/or biased content, and that its knowledge of events after 2021 is limited. Taking these factors into consideration when creating AI usage policies is important, as are the following:
- Because ChatGPT is third-party software, there is no guarantee that any confidential information entered into it will be protected or remain confidential. A number of financial services companies have blocked use of ChatGPT’s website due to concerns about the security of company and client data.
- While ChatGPT is intelligent, so are humans. Recipients of ChatGPT-generated writing may deduce that the written product they’re receiving was crafted by a non-human. Further, it’s still important to review anything produced by ChatGPT for accuracy and readability.
- Use of the software may create a risk of client mistrust or reputational harm if clients learn that AI was used rather than the individual or team they are paying to do the work.
- As companies enable the use of ChatGPT, employees may question the viability of their positions, and employers may notice a decline in employee work quality and/or company loyalty.
- If a company doesn’t create or own the content, are copyright laws being appropriately followed?
- If an AI policy is put into place, what are the ramifications of non-compliance? How will the policy be monitored and enforced? Will there be a grace period for employees to understand both the actual guidelines and the spirit of the policy?
- Might ChatGPT’s output contain bias or discriminatory content? That output should be carefully reviewed to ensure it does not.
Brouwer acknowledges that artificial intelligence usage policies may prove to be among the most difficult personnel policies to write, but creating a policy now can mitigate or eliminate challenges down the road.
“It’s important to understand that employee handbook policies exist to address a particular issue as it currently stands, but policies are dynamic and can always be changed,” Brouwer reminds employers. “As ChatGPT and other artificial intelligence products emerge and evolve, it’s easier to update an existing policy than to start from scratch after the negatives of not having a policy come to light. And employers can probably plan on updating AI use policies often, given how rapidly the technology changes.”
About Nemeth Bonnette Brouwer PC
Celebrating more than 30 years, Nemeth Bonnette Brouwer specializes in employment litigation, traditional labor law, workplace investigations, and management consultation and training for private and public sector employers. The firm also provides arbitration and mediation services. Woman-owned and led since its founding, Nemeth Bonnette Brouwer exclusively represents management in the prevention, resolution and litigation of labor and employment disputes.
###