DEEP DIVE
Creating an AI Policy That People Will Actually Follow
The Big Picture
Marketing teams are racing to adopt AI, with budgets following suit. According to AI at Wharton & GBK Collective's "Growing Up: Navigating Gen AI's Early Years" report, 84% of marketers plan to increase their Gen AI investments in the coming year—27% by more than 10%, and 57% by between 1% and 10%.
Yet there's a glaring disconnect between investment and governance. The same research shows only half of companies have basic data privacy policies for AI in place. Even among enterprises with $2B+ in revenue, just 63% have data privacy guardrails, and 54% maintain ethical guidelines. The rest are deploying powerful technology without clear boundaries or safety nets.
[Chart: What types of responsible AI policies does your organization have in place for Gen AI? Source: "Growing Up: Navigating Gen AI's Early Years," AI at Wharton & GBK Collective, Oct 2024]
Why It Matters
This gap between AI adoption and governance isn't just a compliance risk—it sends mixed signals about leadership's understanding of AI's implications. When teams lack clear policies, they're left guessing about what's allowed, what's risky, and what's off-limits entirely.
This guide breaks down how to create an AI policy with practical advice, including a ready-to-use template and examples from top brands. If other teams in your organization are already driving AI transformation and working on policies, join their efforts. If nobody has taken the lead yet, it's time to step up and take action.
Why You Can't Put This Off Any Longer
Let's cut to the chase. You need an AI policy for four key reasons:
1. Active Usage: Your team is very likely already using AI tools—87% of marketers have experimented with them, and 68% use AI in their daily work, according to The Conference Board. They're making important decisions about data usage, content creation, and customer interactions based on their best judgment.
2. Resource Optimization: When teams adopt AI tools without coordination (and some employees do so without company consent), you end up with redundant subscriptions and inconsistent practices—leading to suboptimal results.
3. Legal Compliance: Between GDPR, CCPA, FTC enforcement, and governments' growing interest in AI practices, the risks of uncontrolled AI use are significant. These include potential intellectual property infringement, inaccuracy, and bias concerns.
4. Trust Building: A transparent AI policy assures customers and partners of your commitment to responsible AI use, helping build trust and long-term relationships.
Your 30-Day Plan to Create an AI Policy
Creating an AI policy can feel overwhelming, but you don't need to figure everything out at once. Here's a practical, four-week plan to get your policy off the ground, focusing on what matters most.
Week 1: Understand Your Current State
What to do:
Create a 5-question survey about AI usage:
What AI tools are you currently using?
How often do you use them?
What tasks do you use them for?
What data do you input into these tools?
What challenges or concerns do you have?
Schedule 15-minute coffee chats with 3-4 power users
Create a simple spreadsheet to track:
All AI tools currently in use
Monthly costs
Number of users
Primary use cases
Document top 3 challenges teams mention repeatedly
Expected outcome: A clear picture of your current AI landscape and key pain points.
Week 2: Build Your Foundation
What to do:
Identify your core team:
1-2 marketing team members who actively use AI
1 legal representative for compliance guidance
1 IT representative for security considerations
1 senior leader for strategic alignment
Host a 90-minute kickoff meeting to:
Share Week 1 findings
Define policy scope
Set timeline and milestones
Assign specific responsibilities
Create a shared document for collaboration
Set up weekly 30-minute check-ins
Expected outcome: Clear ownership, timeline, and working structure.
Week 3: Draft Your Policy
What to do:
Start with these five core elements:
1. Approved Tools and Use Cases
List approved tools
Define specific use cases
Create examples of what's allowed and what isn't
2. Data Guidelines
Define data categories
Set usage rules (e.g., privacy compliance measures)
Create a decision tree for data sharing
3. Human Oversight Requirements
Identify high-risk areas
Create checklists per area
Set review processes
4. Compliance Requirements
Document legal requirements
Set audit procedures
Define documentation needs
5. Training Plan
Define basic AI literacy needs
Create use case and tool-specific guides
Plan ongoing education
Expected outcome: First draft of your policy with clear guidelines for each area.
Resources available:
This GPT by Heather Murray can guide you through creating your AI policy.
Week 4: Test and Launch
What to do:
Select 3-5 team members for pilot testing
Create a feedback form covering:
Policy clarity
Practical challenges
Missing elements
Implementation concerns
Schedule a 30-minute feedback session with the pilot team
Collect and incorporate feedback
Prepare launch materials:
One-page quick start guide
FAQ document
Training schedule
Support system details
Expected outcome: A tested, refined policy ready for team-wide rollout.
Making It Stick
The hard truth: even the best policy is useless if people ignore it. Here's how to make yours work:
Keep it simple, clear, and practical
Start small and iterate—you can always expand later
Focus on enabling rather than restricting
Use real examples with quick reference guides for common scenarios
Set up an easy way for people to ask questions and provide feedback
The Bottom Line
Perfect is the enemy of good. Start by addressing your team's most pressing needs and build from there. Remember, your policy will evolve as AI usage matures.
Start simple, iterate often, and focus on what matters most: helping your company and team use AI effectively and safely.