Why Every Organization Needs a Generative AI Usage Policy Right Now

Interview with Anthony DeSimone, Owner of You’re the Expert Now LLC and Generative AI Specialist

Anthony DeSimone has worked with hundreds of organizations across industries, helping leaders safely adopt Generative AI while avoiding the pitfalls of Shadow AI. In this interview, DeSimone shares why a formal AI usage policy is essential and outlines the structure every company should follow. Each “Tip” comes directly from his experience guiding companies through their first generative AI usage policies.

The Growing Risk of Shadow AI

Surveys show roughly 42% of employees are using generative AI tools like ChatGPT at work, often without informing leadership.

“This shadow use is where the biggest risks come from,” DeSimone explains. “If leaders don’t provide guardrails, employees will make their own rules.”

A 2024 study by Harmonic Security revealed that over 4% of prompts and more than 20% of files uploaded to AI platforms contained sensitive corporate data. According to DeSimone, “These aren’t rare slip-ups. They’re a clear warning sign that companies must act.”

Real-World Examples

  • Samsung (2023): Engineers accidentally uploaded proprietary source code to ChatGPT, leading to a company-wide ban.

  • Small business incident: An employee used an AI tool to process client contracts and inadvertently shared confidential client information.

“These stories repeat themselves across industries,” DeSimone says. “What they all have in common is the absence of a clear, enforceable policy.”

What a Robust AI Usage Policy Needs to Include

“A strong AI usage policy does two things,” DeSimone explains. “It protects the organization from risk, and it empowers employees to use AI responsibly and effectively.”

1. Definitions

The policy should start with clear definitions to avoid confusion. Examples include:

Generative AI, Confidential Data, Proprietary Information, Approved Tools, Hallucinations, Shadow AI, Large Language Model (LLM), Prompt, Training Data, Token, Bias, Data Privacy.

Tip (DeSimone): “These are just starting points. Customize the list to your organization. The goal is alignment—everyone needs to be speaking the same language.”

2. Approved Tool List (Overview)

Clearly state that employees may only use tools on the Approved Tool List.

Tip (DeSimone): “If your team is new to AI, or if you’re handling highly confidential information, keep it simple. Only allow tools approved for confidential data use. If it’s not on the list, it’s off limits.”

3. IT Governance: Acceptable vs. Unacceptable Use

IT should define boundaries in plain language (a simple screening sketch in code follows the tip below):

  • Approved Data: Public marketing materials, website copy, FAQs, and general research content.

  • Restricted Data: PII, attorney-client privileged documents, HR records, financial account numbers.

  • Security Standards: Encryption, storage, and vendor retention rules.

  • Vendor Monitoring: Regular review of vendor practices and updates.

Tip (DeSimone): “IT must set minimum security and encryption standards. Without guardrails, employees will assume any system is safe.”
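To make these guardrails concrete, here is a minimal Python sketch of the kind of pre-submission screening IT might place in front of an AI tool. The pattern names and regexes are illustrative assumptions, not part of DeSimone’s guidance, and a production guardrail would rely on dedicated data loss prevention (DLP) tooling rather than hand-rolled rules.

```python
import re

# Illustrative patterns only (an assumption for this sketch); real guardrails
# should rely on dedicated DLP tooling, not hand-rolled regexes.
RESTRICTED_PATTERNS = {
    "US Social Security number": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "Payment card number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "Account-number keyword": re.compile(r"\baccount\s+(?:no\.?|number)\b", re.I),
}

def screen_prompt(text: str) -> list[str]:
    """Return the names of restricted-data patterns detected in a prompt."""
    return [name for name, pat in RESTRICTED_PATTERNS.items() if pat.search(text)]

hits = screen_prompt("Summarize the dispute on account number 4111 1111 1111 1111.")
if hits:
    print("Blocked; prompt appears to contain:", ", ".join(hits))
else:
    print("Passed basic screening.")
```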

4. New Tool Approval Process

Employees will inevitably discover new tools. Create a process (sketched in code after the tip below):

  1. Submission: Employee provides tool name, purpose, and use case.

  2. Evaluation: IT reviews security, Legal reviews compliance.

  3. Decision: Tool is approved, restricted, or denied.

  4. Communication: Documented and added to (or excluded from) the appendix.

Tip (DeSimone): “By documenting a process, you show you’re open to innovation—but responsibly. That prevents shadow AI and builds trust.”
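As an illustration of how the four steps might be documented, here is a brief Python sketch. The field names, the example tool “HypotheticalScribe,” and the rule that a partial sign-off yields a restricted decision are all assumptions made for this sketch, not details from DeSimone’s process.

```python
from dataclasses import dataclass, field
from datetime import date
from enum import Enum

class Decision(Enum):
    PENDING = "pending"        # step 2: awaiting IT and Legal review
    APPROVED = "approved"      # added to the Approved Tool List appendix
    RESTRICTED = "restricted"  # allowed only for the named use case
    DENIED = "denied"          # documented and excluded from the appendix

@dataclass
class ToolRequest:
    # Step 1: the employee's submission.
    tool_name: str
    purpose: str
    use_case: str
    submitted_by: str
    submitted_on: date = field(default_factory=date.today)
    decision: Decision = Decision.PENDING
    notes: str = ""

def record_decision(req: ToolRequest, it_ok: bool, legal_ok: bool, notes: str = "") -> ToolRequest:
    """Steps 2-3: apply the IT security and Legal compliance reviews."""
    if it_ok and legal_ok:
        req.decision = Decision.APPROVED
    elif it_ok or legal_ok:
        req.decision = Decision.RESTRICTED  # assumption: partial sign-off narrows use
    else:
        req.decision = Decision.DENIED
    req.notes = notes  # step 4: document the outcome
    return req

# "HypotheticalScribe" is a made-up tool name for illustration.
req = ToolRequest("HypotheticalScribe", "meeting transcription", "internal meetings", "j.doe")
record_decision(req, it_ok=True, legal_ok=False, notes="Vendor retains uploads for 90 days.")
print(req.tool_name, "->", req.decision.value)  # HypotheticalScribe -> restricted
```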

5. Training and Accountability

Training should cover:

  • How to use approved tools.

  • What data is safe.

  • How to verify output.

  • Red flags for misuse.

Employees sign off, and managers enforce compliance.

Tip (DeSimone): “Training must be ongoing. Teach both features and risks. Pair that with accountability so employees know rules apply to them.”

6. Risks of Using Generative AI

Common risks include:

  • Confidentiality Breach

  • Hallucinations and Inaccuracy

  • Bias and Fairness Issues

  • Over-Reliance

  • Legal Violations

  • Reputation Damage

Tip (DeSimone): “I call this the ‘People First, People Last’ philosophy. Humans must be involved at the beginning to frame the prompt and at the end to review the output. AI should support people—not replace their judgment.”

7. Consequences of Non-Compliance

Make consequences clear:

  • First offense: retraining and written notice.

  • Repeated offenses: suspension of access.

  • Severe misuse: up to termination.

Tip (DeSimone): “Consequences should be fair. Early violations focus on education. Repeated or intentional ones require stronger action.”

Appendix: Approved Tool List

The appendix is a living record of approved tools; a machine-readable sketch in code follows the tip below. Each entry should include:

  • Tool Name (e.g., ChatGPT, Gemini, Copilot).

  • Model/Version (e.g., GPT-4, Gemini 1.5 Pro).

  • Plan Type (e.g., Enterprise).

  • Confidential Use Allowed? Yes/No.

  • Approved Use Cases (e.g., marketing, research).

Update Schedule: Review and update at least monthly.

Tip (DeSimone): “Be specific. Don’t just say ‘ChatGPT’—say ChatGPT Enterprise. Otherwise employees may assume the free version is okay.”
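One way to keep the appendix both human- and machine-readable is a small registry like the Python sketch below. The entries are placeholders patterned on the examples above, not actual approvals, and the is_allowed helper is a hypothetical name for this sketch.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ApprovedTool:
    name: str                   # e.g., "ChatGPT"
    model_version: str          # e.g., "GPT-4"
    plan_type: str              # be specific: "Enterprise", not just the product
    confidential_use: bool      # may confidential data be entered?
    use_cases: tuple[str, ...]  # approved use cases

# Placeholder entries patterned on the examples above, not actual approvals.
APPROVED_TOOLS = [
    ApprovedTool("ChatGPT", "GPT-4", "Enterprise", True, ("marketing", "research")),
    ApprovedTool("Gemini", "Gemini 1.5 Pro", "Enterprise", False, ("research",)),
]

def is_allowed(name: str, plan: str, needs_confidential: bool) -> bool:
    """Off the list means off limits; confidential work needs a confidential-rated tool."""
    for tool in APPROVED_TOOLS:
        if tool.name == name and tool.plan_type == plan:
            return tool.confidential_use or not needs_confidential
    return False

print(is_allowed("ChatGPT", "Free", needs_confidential=False))      # False: free plan isn't listed
print(is_allowed("Gemini", "Enterprise", needs_confidential=True))  # False: not rated for confidential data
```

Keeping the plan type as part of the lookup key enforces the “ChatGPT Enterprise, not just ChatGPT” distinction automatically.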

Summary

  • Shadow AI use is widespread and risky.

  • A strong policy protects companies and empowers employees.

  • Policies must define terms, limit tools, and train staff.

  • Risks include confidentiality breaches, hallucinations, bias, and over-reliance.

  • Enforcement and consequences must be clear.

FAQs

Q1. Why is a generative AI usage policy necessary?
Because employees already use AI at work, often without approval. Without guardrails, this creates confidentiality, compliance, and reputational risks.

Q2. What is shadow AI?
Shadow AI is the use of AI tools by employees without company approval or oversight.

Q3. How can companies ensure safe AI use?
By approving tools, training employees, setting IT guardrails, and updating policies regularly.

Q4. What are the main risks of AI misuse?
Data breaches, hallucinations, bias, legal violations, and loss of trust.

Q5. What happens if employees break the rules?
Consequences may include retraining, suspension of access, or termination depending on severity.

Q6. How often should the tool list be updated?
At least monthly, since AI products evolve quickly.

Q7. What does “People First, People Last” mean?
It means humans must frame the input and validate the output. AI is a support tool, not a replacement for judgment.