Artificial Intelligence (AI) Principles & Policies
Last Updated: December 15, 2025
These policies have been developed after consulting foundational documentation on ethical AI principles from organizations including Google, IBM, Microsoft, the OECD (Organisation for Economic Co-operation and Development), the European Commission, the AI Now Institute (NYU-affiliated), and TNPA (The NonProfit Alliance).
Our goal is to outline a set of AI principles, policies, and sample use cases that emphasize the responsible and ethical use of AI technologies across our business, our clients, and our work products.
Principles:
At Further, we are committed to using AI technologies responsibly, keeping humans at the center and ensuring that our use of AI is guided by our principles. We have distilled that commitment into six core values that form our foundation:
- Fairness
- Reliability and safety
- Privacy and security
- Transparency
- Accountability
- Inclusiveness
Definitions:
Generative AI – Artificial intelligence that creates new content such as text, images, audio, or video based on patterns it has learned from existing data.
Predictive AI – Artificial intelligence that analyzes historical and real-time data to forecast future outcomes, such as user behavior or campaign performance.
Analytical AI – Artificial intelligence that examines large datasets to detect patterns, explain trends, predict outcomes, and provide data-driven insights for decision-making.
Agentic AI – Artificial intelligence that acts autonomously to achieve complex goals by reasoning, planning, and taking action with minimal human input.
Policies:
Further acknowledges the many benefits that can be derived from using AI, and we strongly encourage our employees to devote time each week to exploring, testing, familiarizing themselves with, and evaluating various AI tools. However, all use of AI is subject to the following policies.
Privacy and Security – All uses must comply with Further’s Data Security & Confidentiality Protocols. Because AI models rely so heavily on data for development and training, data privacy is one of the biggest risks of using AI. We are entrusted with our clients’ data and are contractually bound to limit use of their data to instances where we can ensure privacy, security, reliability, and safety. In all but the rarest cases, Further will not accept Personally Identifiable Information (PII) from clients. Files containing PII will be rejected unless the use case has been specifically approved by both parties.
Despite these precautions, the more data we put into AI tools, the more likely it is that sensitive information will slip through the cracks. To prioritize data privacy, Further is committed to the following:
- Limiting access to sensitive information
- Requiring explicit permission to input client and/or company (Further) data into AI models
- Instructing employees with access to sensitive data to comply with the company’s Data Security & Confidentiality Protocols
Further will NOT load confidential client or company (Further) data into free or open-source versions of any AI tool. Sensitive client data will be loaded only into private AI tools, and only with explicit approval from the client.
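For illustration only, the sketch below shows the kind of automated pre-screening that can support this policy before a file is shared with an approved AI tool. The file format, column contents, and regular expressions are hypothetical and deliberately narrow; an automated check like this supplements, and never replaces, the approval and review requirements above.

```python
import csv
import re
import sys

# Hypothetical patterns for obvious PII (emails, US phone numbers, SSNs).
# A real screening pass would cover far more (names, addresses, payment
# details) and would be paired with manual review under the Data Security
# & Confidentiality Protocols.
PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "us_phone": re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}


def find_pii(path):
    """Return (row number, pattern name) pairs for every hit in a CSV file."""
    hits = []
    with open(path, newline="", encoding="utf-8") as f:
        for row_num, row in enumerate(csv.reader(f), start=1):
            for cell in row:
                for name, pattern in PII_PATTERNS.items():
                    if pattern.search(cell):
                        hits.append((row_num, name))
    return hits


if __name__ == "__main__":
    matches = find_pii(sys.argv[1])
    if matches:
        print(f"Rejecting file: {len(matches)} potential PII value(s) found.")
        sys.exit(1)
    print("No obvious PII detected; proceed only with the required approvals.")
```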
Many of the advertising platforms Further utilizes employ their own versions of AI. After gaining client approval to load data into these platforms, Further disclaims any responsibility for the AI principles and procedures of these third-party platforms. We will, however, do our best to remain knowledgeable about how these platforms use AI and to educate clients accordingly.
Transparency – Further endeavors to be highly transparent in our use of AI. We will notify clients that our policy allows AI to be used to generate content that may be used in advertising. We are also willing to restrict or limit our use of AI based on client-specific rules or preferences.
Fairness, Accountability, Inclusiveness – Further understands that AI is subject to error, poor judgment, bias, inaccuracies, and ‘hallucinations’. To this end, we require that all AI outputs be reviewed by a human before being released to other parties. Employees are accountable for the accuracy and appropriateness of all content or work product for which they choose to employ AI. For competitive reasons, we do not require employees to notify clients of which specific AI tools were used.
The human review of AI content is designed to confirm information for accuracy and double-check content for any bias.
Rules for AI Usage
FURTHER EMPLOYEES MUST NOT:
- Rely on AI technology for decision-making. Human oversight is essential. Any AI-generated output should be reviewed carefully to ensure it is fair, accurate, and unbiased.
- Input any Personal Data* into an AI system. This includes any Personal Data* that may relate to our organization’s clients, staff, client donors, and any other individuals whose data we collect or process.
- Input any confidential or proprietary information of our organization or its clients, client donors or partners into any non-secure AI system or instruct an AI system to generate outputs that contain confidential or proprietary information.
- Instruct any AI system to generate any Personal Data* pertaining to any of our organization’s clients, staff, client donors, or any other individuals. In the event that any Personal Data* concerning an individual or group of individuals is generated by the application as a by-product, do not save such information.
- Download or install any AI technology on our organization’s systems without prior approval from Management. Further’s IT consultant will conduct a thorough risk assessment prior to approving the use or deployment of any AI technology.
*Personal Data means and includes all information regarding or reasonably capable of being associated with an identified or identifiable individual or household, or information (including sensitive information) that can directly or indirectly identify a natural person, whether or not they are named and whether or not the data contains a unique identifier. Personal Data may relate to our organization’s clients, staff, clients’ donors, and any other individuals whose data we collect or process.
Permitted Use Cases:
Generative AI – The following guidelines define how Generative AI tools may be used to support marketing and data activities, ensuring we gain efficiency and creativity while protecting privacy, accuracy, and brand integrity.
Generative AI Examples:
Idea Generation
- Use AI to brainstorm campaign concepts, content angles, subject lines, donor engagement ideas, and testing hypotheses.
- All suggestions must be reviewed for brand fit, factual accuracy, and mission alignment before use.
Machine Learning & Data Modeling
- Use AI/ML platforms to analyze campaign data, build predictive models (e.g., churn risk, upgrade likelihood), and surface actionable insights.
- Only use secure, approved platforms; never upload PII or proprietary data to public tools.
- Models must use ethically sourced, consented data, be monitored for bias, and support—not replace—human decision-making.
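To make these expectations concrete, here is a minimal churn-risk modeling sketch in Python. The file name and columns (gift_count, months_since_last_gift, avg_gift, churned) are hypothetical stand-ins for an approved, de-identified, consented dataset; the point is that the model produces scores for a human to act on, not decisions.

```python
# Minimal churn-risk modeling sketch. File name and columns are hypothetical;
# real work would use an approved, de-identified, consented dataset only.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

donors = pd.read_csv("consented_donor_features.csv")  # no PII, approved platform only
features = ["gift_count", "months_since_last_gift", "avg_gift"]

X_train, X_test, y_train, y_test = train_test_split(
    donors[features], donors["churned"], test_size=0.2, random_state=42
)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
risk = model.predict_proba(X_test)[:, 1]          # churn probability per donor
print("Holdout AUC:", roc_auc_score(y_test, risk))

# Scores inform, not replace, human decisions: surface the highest-risk
# donors for a person to review before any retention outreach happens.
review_queue = X_test.assign(churn_risk=risk).sort_values("churn_risk", ascending=False)
print(review_queue.head(10))
```

A logistic regression is used here purely for simplicity; whatever the model, the same monitoring-for-bias and human-review obligations apply.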
Copywriting
- Use AI to draft marketing copy (emails, ads, social posts, landing pages) to speed production.
- All AI copy must be fact-checked, edited for tone and brand voice, and free of manipulation before publication.
Visual, Video & Audio Content Creation
- Use AI to create concept visuals, ad mockups, video storyboards, and audio drafts for early creative exploration.
- Final assets must be reviewed for brand consistency, inclusion, factual accuracy, and licensing compliance before public release.
- Disclose AI use when required by platform policy or ethical guidelines.
Presentation Development
- Use AI to outline slides, suggest layouts, and draft speaker notes or talking points.
- All content and data must be verified for accuracy, confidentiality, and brand alignment before sharing internally or externally.
Predictive AI – Artificial intelligence powers modern digital advertising platforms by predicting which audiences, creatives, and placements are most likely to drive results. Tools such as Google Performance Max, AI Max, and Meta Advantage+ use machine learning to analyze user behavior and campaign data in real time, automatically optimizing targeting, bidding, and creative delivery to maximize performance.
Predictive AI Examples:
Google Ads — Performance Max & AI Max
- Automated Audience Expansion: Predicts which users are most likely to convert beyond manually defined segments, using signals such as search intent, site visits, and purchase behavior.
- Creative Asset Selection: Uses AI to predict which headlines, descriptions, and images will perform best and automatically rotates combinations to maximize conversions.
- Conversion Value Rules & Smart Bidding: Predicts the likelihood and value of a conversion in real time, adjusting bids dynamically for higher ROI.
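As a toy illustration of the idea behind value-based bidding (not Google’s actual algorithm, whose internals are not public), a bid can be thought of as expected conversion value divided by the advertiser’s return target. The function name, the ROAS target, and the probabilities below are invented for illustration.

```python
def value_based_bid(p_conversion, predicted_value, target_roas, max_bid=10.0):
    """Toy expected-value bid: spend up to expected revenue / target ROAS.

    p_conversion and predicted_value would come from the platform's own
    real-time models; target_roas is the advertiser's goal (e.g. 4.0 means
    $4 of conversion value per $1 of spend). Illustrative only.
    """
    expected_value = p_conversion * predicted_value
    return min(expected_value / target_roas, max_bid)


# A high-intent user with a large predicted gift earns a higher bid than a
# low-intent user, within the advertiser's ROAS target.
print(value_based_bid(p_conversion=0.08, predicted_value=150.0, target_roas=4.0))  # 3.0
print(value_based_bid(p_conversion=0.01, predicted_value=40.0, target_roas=4.0))   # 0.1
```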
Meta (Facebook & Instagram) — Advantage+
- Advantage+ Audience Expansion: Dynamically predicts and finds additional high-value users outside of seed audiences, such as lookalikes or custom lists.
- Creative Optimization: Predicts the best ad variation to serve each user, rotating images, videos, and text for expected highest engagement or conversion.
Display & Programmatic DSPs
- Dynamic Creative Optimization (DCO): Predicts which creative variation will resonate most with each user segment and serves it in real time.
- Lookalike Modeling: Predicts audiences similar to converters or high-value customers, adjusting reach dynamically as performance data changes.
- Conversion Propensity Models: Predicts the likelihood of a user taking a desired action (donation, signup, purchase) and bids accordingly.
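As a rough sketch of the lookalike idea (not any DSP’s actual implementation), prospects can be scored by their feature-space distance to a seed audience of known converters. The feature values below are invented, and a real platform would run this on its own approved data at a very different scale.

```python
# Rough lookalike-modeling sketch: score prospects by similarity to a seed
# audience of known converters. Features are invented (sessions in the last
# 30 days, pages per session, past gift total); not any DSP's real pipeline.
import numpy as np
from sklearn.neighbors import NearestNeighbors
from sklearn.preprocessing import StandardScaler

seed = np.array([[12, 4.0, 250.0], [8, 3.5, 90.0], [15, 5.2, 400.0]])     # converters
prospects = np.array([[10, 3.8, 180.0], [1, 1.1, 0.0], [14, 4.9, 310.0]])

scaler = StandardScaler().fit(np.vstack([seed, prospects]))
nn = NearestNeighbors(n_neighbors=1).fit(scaler.transform(seed))

# Smaller distance to the nearest converter = stronger lookalike candidate.
distances, _ = nn.kneighbors(scaler.transform(prospects))
for row, dist in zip(prospects, distances.ravel()):
    print(row, "-> lookalike distance:", round(float(dist), 2))
```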
Analytical AI – Analytical AI refers to tools that examine large datasets to uncover trends, anomalies, and key performance drivers without creating new content. Within our data stack, platforms such as Google Analytics 4, Snowflake, and Tableau utilize AI to surface insights, forecast outcomes, and explain performance patterns, thereby supporting data-driven decision-making.
Analytical AI Examples:
Google Analytics 4 (GA4)
- Anomaly Detection: Automatically flags unusual traffic or conversion patterns.
- Automated Insights: Highlights key shifts in donor behavior or campaign performance without manual query building.
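GA4’s anomaly detection is proprietary, but the underlying idea can be sketched with a simple rolling baseline: flag days whose metric sits far outside recent history. The export file and column names below are hypothetical.

```python
# Rough illustration of metric anomaly flagging (not GA4's actual model):
# flag days whose sessions fall well outside a trailing 28-day baseline.
import pandas as pd

daily = pd.read_csv("daily_sessions.csv", parse_dates=["date"])  # hypothetical export
baseline = daily["sessions"].rolling(window=28, min_periods=14)

daily["zscore"] = (daily["sessions"] - baseline.mean()) / baseline.std()
print(daily.loc[daily["zscore"].abs() > 3, ["date", "sessions", "zscore"]])
```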
Snowflake (with AI/ML integrations)
- Automated Pattern Detection: Uses Snowflake’s native ML functions or integrations to identify trends in donor giving or campaign ROI.
- Data Quality & Anomaly Detection: Flags duplicate donor records, unusual giving spikes, or outlier transactions.
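The same kinds of checks can be sketched outside the warehouse; the pandas version below uses hypothetical column names and a de-identified export, whereas in practice Snowflake’s own ML functions or integrations would run against governed data in place.

```python
# Sketch of a duplicate-record and giving-spike check. Column names
# (email_hash, donor_id, gift_date, amount) are hypothetical, and the data
# is assumed to be a de-identified export from an approved environment.
import pandas as pd

gifts = pd.read_csv("gifts.csv", parse_dates=["gift_date"])

# Potential duplicate donor records: the same hashed email key appearing
# under more than one donor_id.
dupes = gifts.groupby("email_hash")["donor_id"].nunique()
print("Keys linked to multiple donor IDs:", dupes[dupes > 1].index.tolist())

# Unusual giving spikes: daily totals more than three standard deviations
# above the historical mean, flagged for a human to investigate.
daily = gifts.groupby(gifts["gift_date"].dt.date)["amount"].sum()
print(daily[daily > daily.mean() + 3 * daily.std()])
```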
Tableau (with AI features such as Explain Data & Ask Data)
- Explain Data: Automatically identifies and explains factors driving outliers in campaign KPIs or donor metrics.
- Forecasting: Uses built-in time-series models to project giving or conversion trends based on historical data.
- Natural Language Queries (Ask Data): Lets users explore donor and campaign insights by typing plain-language questions (e.g., “Which campaigns drove the highest recurring gifts?”).
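As a rough stand-in for the kind of time-series projection described above (not Tableau’s internal forecasting engine), the sketch below fits a Holt-Winters exponential smoothing model with statsmodels; the monthly giving totals are invented.

```python
# Invented monthly giving totals projected three months ahead with
# exponential smoothing; illustrative only, not Tableau's forecasting engine.
import pandas as pd
from statsmodels.tsa.holtwinters import ExponentialSmoothing

giving = pd.Series(
    [42_000, 39_500, 41_200, 44_800, 47_300, 45_900,
     48_100, 50_400, 52_250, 55_600, 71_900, 90_300],
    index=pd.date_range("2024-01-01", periods=12, freq="MS"),
)

model = ExponentialSmoothing(giving, trend="add").fit()
print(model.forecast(3))  # projected totals for the next three months
```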
Agentic AI – At this time, Further has no plans to employ Agentic AI, and therefore no examples are provided. Agentic AI currently runs counter to our principle of human involvement in, and review of, all AI outputs.
Prohibited Use Cases:
Whenever possible, Further attempts to defer to each client’s own AI policies. If no client guidance is provided, the following uses are strictly prohibited:
Sharing Confidential or Proprietary Data
- Uploading donor PII (e.g., names, emails, addresses, payment details) to public or consumer AI platforms.
- Entering client data, internal financials, campaign strategies, or other proprietary details into tools that use submissions to train public models.
Using Non-Secure or Unvetted Platforms
- Employing free or unapproved AI tools lacking enterprise-grade security, privacy controls, or clear data-handling policies.
- Bypassing IT/Legal review to test new AI tools with sensitive or regulated data.
Generating Misleading Content
- Creating deceptive donor messaging, or fabricated impact stories.
- Altering images or videos to misrepresent the organization’s work or beneficiaries.
Misrepresenting AI-Generated Work
- Presenting AI-created content, images, video, or decisions as fully human-produced when disclosure is required by law, platform policy, or widely adopted ethical standards.
Bypassing Consent or Privacy Requirements
- Collecting or using supporter data in AI-driven personalization without proper consent, opt-out controls, or compliance with regulations (GDPR, CCPA, etc.).
Violating Copyright or Licensing Rules
- Using AI to generate content (text, images, audio, video, code) that infringes on copyrights, trademarks, or third-party rights.
These principles and policies will evolve as the nature, applications, and best practices of Artificial Intelligence change, both in the marketplace and in our industry. When our policies change, we will post an updated version of this document to our website, and if we believe the changes significantly impact clients, we will notify them directly.