Generative AI is suddenly everywhere. Students are using it to help with schoolwork. Teachers are using it to draw up lesson plans. Employees at all levels, and at organizations of every size, are finding ways to use it to make their work more efficient. But is your organization prepared for the data-exposure risks that generative AI brings? Have you created a set of generative AI policies and made your staff (plus vendors, consultants, and so on) aware of them? If not, here are some tips for doing just that.
Before you even start setting rules, the first step is to outline, in detail, the sensitivity level of all your organization's information. Essentially, it needs to be clear to staff exactly what information can and cannot be fed into generative AI tools like ChatGPT.
Consider creating a chart that defines the sensitivity level of each class of data and provides examples. I recently saw such a chart created by a foundation. Here's how that foundation broke it down:
The first category, public data, is fairly obvious: if the public can already find it on the internet, it's safe to upload to a generative AI tool. Examples include published news stories, public domain information, newsletters, audited financials, published lists of grants, and so on.
The second category covers data that, while not already out in public, you may feel is safe for staff to share with AI tools. Essentially, anything that reveals neither identifiable personal information nor proprietary organizational information is okay to feed into these tools. Examples include anonymized grantee data, non-sensitive internal communications (think holiday calendars and the like), and departmental policies, if, and only if, they have been approved by the department head.
The third category, confidential data, is the most important part of the chart: it's the What Not to Share section. (A simple screening sketch follows the list below.) It includes:
Any personal information about employees, vendors, and other contacts (personal contact info, SSNs, health information, login credentials, HR data such as performance reviews, VIP contact info)
Legal documents subject to attorney-client privilege
Contractual information
Any info subject to third-party confidentiality requirements or expectations
Proprietary information about organizational processes or products, including investment information and partner evaluations
Source code of any custom applications within the organization
Undisclosed (not published) grant documentation
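Screening for this kind of data doesn't have to be purely manual. As a minimal, purely illustrative sketch (the patterns and function name below are hypothetical, and a real deployment would rely on a proper data-loss-prevention tool rather than a handful of regexes), a pre-flight check can flag the most obvious confidential patterns before text is pasted into a generative AI tool:

```python
import re

# Hypothetical patterns for a first-pass screen. A real deployment would
# rely on a dedicated data-loss-prevention tool, not a handful of regexes.
CONFIDENTIAL_PATTERNS = {
    "US Social Security number": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "email address": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "possible credential": re.compile(r"(?i)\b(password|api[_-]?key|secret)\s*[:=]\s*\S+"),
}

def screen_for_confidential_data(text: str) -> list[str]:
    """Return warnings for text that is about to be sent to an AI tool."""
    warnings = []
    for label, pattern in CONFIDENTIAL_PATTERNS.items():
        if pattern.search(text):
            warnings.append(f"Found what looks like a {label}; remove it before uploading.")
    return warnings

# Example: this draft would trigger two warnings before upload.
draft = "Summarize this note: Jane's SSN is 123-45-6789, email jane@example.org."
for warning in screen_for_confidential_data(draft):
    print(warning)
```

A check like this only catches the obvious cases; it's a reminder, not a substitute for the human judgment your policy asks staff to exercise.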
Once the chart is in place, turn to the policy itself. First, explain the principles behind the policies, so your staff fully understands why they are necessary. Go over both the positives and negatives of using AI tools, and explain the various risks of harm, including:
exposing private data
the potential for offensive, erroneous, or biased AI output
violating intellectual property rights
Consider including a section that offers guidance on how AI tools are best used. Examples could include:
drafting an email
summarizing a meeting
creating an agenda
making a first draft of a (non-confidential) document
As one foundation explained in the guidance section of its new policy, don't input or "ask" an AI tool anything you wouldn't want to see as a newspaper headline! And beyond this best-use advice, remind employees that they alone are responsible for fact-checking the output and for removing any information that violates the rules outlined in your policies.
Using the categories from the data sensitivity chart described earlier, spell out exactly what must not be uploaded to generative AI tools. Also consider providing a list of the AI tools that are pre-approved for use by your organization and instructing staff to use only those. (Any other tool could be subject to approval by your head of IT.)
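To make that pre-approved list easy to consult, or eventually to enforce through a browser extension or proxy, it can help to keep it in a simple machine-readable form. The sketch below is purely illustrative; the tool domains and the `is_approved` helper are hypothetical stand-ins for whatever your head of IT actually maintains:

```python
# Hypothetical allowlist of AI tools cleared by the organization.
# These entries are illustrative only; the head of IT maintains the real list.
APPROVED_AI_TOOLS = {
    "chatgpt.com",
    "copilot.microsoft.com",
}

def is_approved(domain: str) -> bool:
    """Check whether an AI tool's domain is on the pre-approved list."""
    return domain.lower() in APPROVED_AI_TOOLS

print(is_approved("chatgpt.com"))      # True
print(is_approved("some-new-ai.app"))  # False: route to the head of IT for approval
```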
Determine, and then spell out, the consequences of noncompliance.
Obviously, these generative AI policies are ineffective if your staff aren't aware of them. But how best to ensure they've read and understood the new rules? Or that they know enough about AI tools to understand what all the fuss is about? Consider training sessions: these could be added to your regular cybersecurity training or, given the novelty of some AI tools, offered as dedicated, standalone training. For best results, make the training interactive and fun, and include quizzes to ensure everyone is paying attention.
Generative AI tools are a hot product right now, and for the foreseeable future new ones will be hitting the internet regularly while the ones already out there constantly evolve. For that reason, it is critical that your generative AI policy and training evolve to keep up. Don't assume that the policy and training you provide today will cover everything available six months from now.
The time to create these generative AI policies is now. Don’t wait until your organization’s sensitive information is exposed and there’s no going back!