ChatGPT Policy for your organisation: The CISO’s concern

ChatGPT has been a buzzword for some time now. Be it a new product built on ChatGPT or improvements to ChatGPT itself, just when I think the hype is over, I hear about another advancement in AI. I am excited about what is next in the AI world, but here I want to write about the privacy concerns around the information users provide to ChatGPT while using it. According to OpenAI's privacy policy:

[Screenshot: OpenAI Privacy Policy]

The implication of this policy is a no-brainer: any CISO would be concerned about it. What if users share sensitive information while asking ChatGPT questions? What if business strategies are mentioned in the questions asked? What if…? There are plenty of possibilities for such scenarios.

If you think I am just making up scenarios, look at what happened at Samsung — Whoops, Samsung workers accidentally leaked trade secrets via ChatGPT.

So, what should you do?

Should you ban ChatGPT?

You can, but I would not recommend it. ChatGPT can be a useful tool to enhance productivity and encourage innovation and learning at your organisation.

Should you regulate it?

Realistically, no. Few organisations have the resources and time to monitor and regulate the kind of information their employees provide to ChatGPT. I am not saying it is technically impossible, just too resource-intensive for most. For example, OpenAI states that data submitted through its APIs is not used to train its models, so you could build a custom ChatGPT-style solution for your organisation on top of them, but that would be too expensive for most organisations.
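For context, the core of such a custom solution is not the hard part; the cost sits in everything around it. A minimal sketch, assuming the `openai` Python package as it existed in early 2023 and a hypothetical `ask` helper, with the API key read from an environment variable, could look like this:

```python
# Minimal sketch of an internal ChatGPT-style helper built on the OpenAI API.
# Assumes the `openai` Python package (early-2023 versions) and an API key in
# the OPENAI_API_KEY environment variable. Per OpenAI's API data usage policy
# at the time of writing, data sent via the API is not used for model training.
import os

import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

def ask(question: str) -> str:
    """Send one question to the chat completions endpoint and return the answer."""
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": question}],
    )
    return response["choices"][0]["message"]["content"]

print(ask("Summarise our acceptable usage policy in one sentence."))
```

The snippet itself is trivial; the authentication, logging, prompt filtering, hosting, and support around it are where the real cost lies, which is exactly why most organisations will not build this in-house.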

So, what should you do?

ChatGPT should not be treated as something special. Treat it as just another tool you use to check items off your work to-do list efficiently. It is simply another security awareness topic, one that deserves extra attention these days.

It’s time to revisit your Acceptable Usage Policy or Acceptable Usage Guidelines and make the necessary amendments. Define what acceptable usage of ChatGPT looks like for your organisation. You can state in a guideline that no sensitive information should be shared with ChatGPT when asking questions. But here is the catch: this cannot be a vague statement. Make it specific and spell out exactly what counts as ‘sensitive information’. You have to be specific because information provided to ChatGPT can seem non-sensitive for a given question yet be sensitive in a larger context, once ChatGPT connects the dots across the information shared over a long period. You can also define the ‘sensitive information’ per department; for example, the sales team should not include client names, and engineering teams should not include API keys in their questions.
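To make the engineering-team example concrete, here is a minimal sketch of a pre-submission check that flags prompts containing likely credentials. The pattern names and the `flag_sensitive` helper are hypothetical illustrations, not a real DLP tool, and the regexes are deliberately naive:

```python
import re

# Hypothetical illustration: naive patterns for secrets that should never
# appear in a ChatGPT prompt. A real program would use proper secret-scanning
# or DLP tooling; this only shows the idea.
SENSITIVE_PATTERNS = {
    "OpenAI-style API key": re.compile(r"sk-[A-Za-z0-9]{20,}"),
    "AWS access key ID": re.compile(r"AKIA[0-9A-Z]{16}"),
    "Assignment of a secret": re.compile(
        r"(?i)(api[_-]?key|password|secret)\s*[:=]\s*\S+"
    ),
}

def flag_sensitive(prompt: str) -> list[str]:
    """Return the names of any sensitive patterns found in the prompt."""
    return [name for name, pattern in SENSITIVE_PATTERNS.items()
            if pattern.search(prompt)]

hits = flag_sensitive("Debug this: api_key = 'sk-abc123def456ghi789jkl012'")
if hits:
    print("Do not submit this prompt. Found:", ", ".join(hits))
```

Even a crude check like this makes the policy's definition of 'sensitive information' tangible for employees; a mature program would replace it with proper secret-scanning tooling.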

So, now that you have a policy, how would you enforce it?

  • Build a strong awareness program specifically for ChatGPT usage.
  • Send informational mailers and conduct awareness training periodically.
  • Include it in the onboarding information security training for new joiners.
  • Include questions about ChatGPT usage in your information security awareness assessments to gauge awareness levels.
  • Once the ChatGPT-specific acceptable usage training has been rolled out, make it part of the regular information security training and integrate it with your existing information security awareness program.

I think this is one of the ways you can spread awareness about the safe usage of ChatGPT without being paranoid about it.

I would love to hear your opinions as well: how are you managing this risk in your organisation? Feel free to reach out or comment on this post.
