Associate Professor Rob Nicholls from UNSW Business School cautions that anyone using ChatGPT for professional purposes should be mindful of disclosing company information while experimenting with it.
With the growing interest in artificial intelligence (AI), many people are now using ChatGPT for various purposes, whether out of curiosity or as part of their unique selling proposition in business.
ChatGPT gained an impressive 100 million users in just 60 days after its launch. Notably, Microsoft has invested a substantial $US10 billion in OpenAI, the start-up behind ChatGPT, and has integrated GPT-4 into Bing.
A key part of the user experience with OpenAI’s ChatGPT is its ability to use machine learning to generate highly useful text from specific prompts, offering potential time savings on work-related tasks such as writing emails.
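For readers who want to see what "generating text from a prompt" looks like in practice, here is a minimal sketch using OpenAI's Python client. The model name and the prompt are illustrative assumptions, and the prompt is deliberately free of any company detail, for the same reasons discussed below.

```python
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

# Hypothetical, deliberately generic prompt: no client names, figures or
# internal details, mirroring the caution the article urges for the chat window.
response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # illustrative model choice
    messages=[
        {"role": "system", "content": "You are a helpful writing assistant."},
        {"role": "user", "content": "Draft a short, polite email declining a meeting invitation."},
    ],
)
print(response.choices[0].message.content)
```

The same principle applies here as in the chat window: whatever goes into `messages` leaves the business.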
For example, ChatGPT can even be used to generate the “perfect” cover letter using the job description provided in an advertisement. However, using ChatGPT at work comes with risks, particularly in terms of potential disclosure of sensitive company information.
For instance, if the job description includes information that could be used by a competitor to identify your business, the risks are significantly higher, especially if recruitment is an important part of your company’s business strategy.
Recent incidents, such as Samsung staff inadvertently giving away sensitive material via ChatGPT, highlight the need to carefully weigh the risks before using it at work.
This issue becomes even more challenging when ChatGPT is used as part of the code development process.
While it’s appealing to use ChatGPT to reduce coding time on software development projects, it’s important to note that prompt material, which ChatGPT uses to improve its answers, can become part of the training set for future generations of the AI, such as GPT-4.
Using ChatGPT to transcribe work meetings or generate meeting minutes before the meeting concludes may be convenient, but it’s crucial to be aware that the transcription may become part of the GPT-4 ecosystem by the end of the meeting.
In fact, Italy has recently banned ChatGPT over privacy concerns, citing a breach of the European General Data Protection Regulation. Although Italy is likely to reverse the ban by the end of April, subject to age verification (over 18) for users, the decision underscores the need to be cautious with sensitive data when using ChatGPT.
Generative AI, like ChatGPT, uses vast amounts of data to create text on a predictive basis and improves based on user feedback. However, the challenge for businesses with curious users is that this feedback may inadvertently include confidential company material.
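As a rough illustration of what "creating text on a predictive basis" means, the sketch below uses a small open model (GPT-2) to continue a prompt one predicted token at a time. It is a toy stand-in for the general technique, not ChatGPT's own system, and the prompt text is invented.

```python
from transformers import pipeline

# A small open model (GPT-2) stands in for ChatGPT here, purely to illustrate
# next-token prediction: the model repeatedly predicts the most likely
# continuation of whatever text it is given.
generator = pipeline("text-generation", model="gpt2")

prompt = "The minutes of today's meeting are"
result = generator(prompt, max_new_tokens=20, do_sample=False)
print(result[0]["generated_text"])
```

Everything in the prompt is raw material for the prediction, which is exactly why confidential text pasted into a prompt is at risk.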
The solution in theory is simple: if material would not normally be disclosed outside of the business, it should not be used as a prompt for ChatGPT or for Bing. The practical difficulty is that search engines, and generative AI tools such as Google’s Bard, are essential business tools, and the distinction between providing information and providing answers can be blurry.
To determine what should and shouldn’t be shared with ChatGPT, a simple test is to consider whether the output of the ChatGPT session would normally be regarded as confidential by your business. If so, it should not be shared with ChatGPT.
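One way to operationalise that test is a simple screen that runs before anything is pasted into a chat window. The patterns below are hypothetical placeholders; a real business would substitute its own classification rules or data-loss-prevention policies.

```python
import re

# Hypothetical, illustrative patterns only: replace with the organisation's
# own definitions of confidential material.
CONFIDENTIAL_PATTERNS = [
    r"\bconfidential\b",
    r"\binternal only\b",
    r"\bcustomer list\b",
    r"\b\d{16}\b",                # card-like numbers
    r"[\w.+-]+@[\w-]+\.[\w.]+",   # email addresses
]

def safe_to_prompt(text: str) -> bool:
    """Apply the article's test: if the text would normally be treated as
    confidential, keep it out of ChatGPT."""
    return not any(re.search(p, text, re.IGNORECASE) for p in CONFIDENTIAL_PATTERNS)

draft = "Internal only: Q3 revenue forecast and customer list attached."
print(safe_to_prompt(draft))  # False -> do not paste into the chat window
```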
Furthermore, if you have used ChatGPT to write your cover letter or resume, it’s worth noting that the AI system used to filter applicants could potentially run your text through GPTZero, an online tool that can detect whether text was written by a generative AI by examining its “perplexity” and “burstiness.”
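Perplexity is, roughly, a measure of how predictable a piece of text is to a language model; machine-generated text tends to score low. The sketch below shows one common way to estimate it with a small open model (GPT-2); it illustrates the general idea rather than GPTZero's actual method, and the sample sentence is invented.

```python
import math
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    # Score the text with the model: the loss is the average negative
    # log-likelihood per token, and exp(loss) is the perplexity.
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        loss = model(ids, labels=ids).loss
    return math.exp(loss.item())

# Lower perplexity means more predictable text, one signal such detectors weigh
# alongside "burstiness" (how much that predictability varies across sentences).
print(perplexity("I am writing to express my interest in the advertised role."))
```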
In conclusion, while ChatGPT can offer valuable assistance in professional endeavors, it’s crucial to be mindful of the risks and avoid sharing confidential or sensitive company information with it.