ChatGPT for Employers: Contemporary Marvel or Legal Minefield

Disclaimer: *Some parts of this article were written by ChatGPT as part of demonstrating its capabilities

Image by D Koi

Chatbots like OpenAI’s ChatGPT have grown in popularity as businesses seek innovative ways to cut costs and boost output with cutting-edge technology. ChatGPT helps employees work more efficiently by assisting them with tasks they encounter in their day-to-day jobs.

Over 40% of professionals polled by Fishbowl (a social platform owned by employer review site Glassdoor) have used ChatGPT at work, and most used the artificial intelligence tool without telling their bosses. Despite its many advantages, ChatGPT presents legal, commercial, and reputational risks in areas including privacy and consumer safety. Employers should therefore assume that employees are using it and take practical measures to regulate that use.

What is ChatGPT?

ChatGPT is a form of artificial intelligence (AI) developed by OpenAI and “*designed to be as natural and human-like as possible.” It can understand and respond to a wide variety of questions and topics, ranging from general knowledge to specific domains such as science, technology, history, and literature. To achieve this, ChatGPT is trained on a “*massive corpus of text data to understand the nuances and complexities of natural language.”

How Are Employees Using ChatGPT at Work?

  • First Drafts: Speeches, memoranda, cover letters, and even mundane emails may all be drafted with the help of ChatGPT. When asked to write this article, ChatGPT came up with several helpful suggestions, including “*discuss the potential legal and ethical challenges associated with ChatGPT, such as the spread of disinformation, cyberbullying, and privacy concerns.”
  • Editing Documents: As a language model trained on millions of documents, ChatGPT excels at text editing. Employees are using it to improve the readability of poorly written texts by correcting grammatical mistakes and enhancing clarity.
  • Generating Ideas: Thanks to its extensive knowledge base and ability to make connections, ChatGPT can be a valuable tool for idea generation. A user can prompt the model with a question or topic, and it will generate a response based on its knowledge of that topic and the information it has access to.
  • Coding: Many programmers claim that ChatGPT has greatly increased their efficiency and productivity in two areas: code generation and code verification (see the brief sketch below).
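
For illustration only, here is a minimal sketch of the kind of code-review request a developer might send to ChatGPT programmatically. It is not taken from the article: the use of the official openai Python SDK (v1 or later), the OPENAI_API_KEY environment variable, the model name, and the sample function are all assumptions made for demonstration purposes.

```python
# Hypothetical sketch: asking ChatGPT to review a short function for bugs.
# Assumes the openai Python package (v1+) is installed and OPENAI_API_KEY is set.
from openai import OpenAI

client = OpenAI()  # picks up OPENAI_API_KEY from the environment

snippet = """
def average(values):
    return sum(values) / len(values)  # fails when values is empty
"""

response = client.chat.completions.create(
    model="gpt-4",  # illustrative model name
    messages=[
        {"role": "system", "content": "You are a careful code reviewer."},
        {"role": "user", "content": f"Review this Python function for bugs:\n{snippet}"},
    ],
)

print(response.choices[0].message.content)  # the model's review, e.g. flagging the empty-list case
```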

What Are the Main Risks of ChatGPT for Employers?

1. Quality Control Risks:

ChatGPT was introduced with the following warning: “ChatGPT sometimes writes plausible-sounding but incorrect or nonsensical responses.” Essentially, this means the information it generates may be presented as fact even when it is inaccurate. Moreover, because the system was trained on data that only extends to 2021, its output may be outdated.

2. Legal Risks:

Regarding copyright infringement, the information used to develop ChatGPT came from the internet, and some of this content may be protected by copyright. Therefore, if the chatbot utilises this content to generate a response to a query, it may constitute an infringement of copyright. If the content is then used by an employee, the employee and the employer may be subject to a copyright infringement/intellectual property claim.

3. Security & Privacy:

Cyberhaven found that 11% of what workers paste into ChatGPT is confidential information. Confidential or sensitive data uploaded to ChatGPT risks being misused, and any sensitive data a worker enters into the system runs the risk of being mined for future enhancements to the model.

Ban or Formulate Policies?

Companies on Wall Street have begun to take a tougher stance. Citigroup Inc, Deutsche Bank AG, Wells Fargo & Co, and Goldman Sachs Group, Inc. are among the financial institutions that have prohibited their employees from using ChatGPT. They are, nevertheless, a minority. So far, just 3% of HR executives polled by Gartner have reported that they have completely shut down the use of ChatGPT in the workplace.

On the other hand, some companies, like Citadel, are trying to get a company-wide licence for the software. Meanwhile, Microsoft Corp released a new version of its Office productivity suite that includes OpenAI’s GPT-4 artificial intelligence model natively in Excel, PowerPoint, Outlook, and Word. Twenty businesses, including eight Fortune 500 corporations, are presently testing the software.

Others surveyed by Gartner have taken a middle ground, neither outright banning nor ignoring the chatbot, but instead cautioning employees that its responses aren’t always accurate or private and may be analysed to determine whether they were AI-generated. Amazon, for instance, has warned employees not to put confidential information into ChatGPT.

What Should Employers in Malaysia Do?

Given the threats to accuracy, data security, and privacy, it would nonetheless be prudent for companies in Malaysia to analyse the potential risks and implement measures to mitigate any harm the technology may cause in the future. Associate Professor Dr Selvakumar Manickam of Universiti Sains Malaysia’s Centre for Cybersecurity Research has emphasised the need for company policies that promote the efficient use of ChatGPT.

The rules for ChatGPT will vary depending on the field or the kind of work being done. We need to recognise its benefits and limit its drawbacks, just as we do with any new technology or means of communication. In the wise words of Datuk Koong Lin Loong, Treasurer-General of the Associated Chinese Chambers of Commerce and Industry of Malaysia (ACCCIM):

“You can have a pen knife, which is used to open letters and boxes, but if it is misused, the knife can be dangerous. So how we use it is important.”
