1. Policy Statement
Family Promise of Puget Sound (FPOPS) recognizes the potential of Generative Artificial Intelligence (AI) tools to enhance productivity, creativity, and efficiency in our operations. However, the use of AI tools also presents unique challenges related to data privacy, accuracy, intellectual property, and ethical considerations. This policy establishes guidelines for the responsible and authorized use of Generative AI tools by all staff, volunteers, and board members, restricting their use to FPOPS-approved AI agents and platforms, while clarifying monitoring rights and emphasizing data management best practices. Our aim is to leverage AI responsibly to further our mission while safeguarding the individuals we serve, our data, and our organizational integrity.
2. Purpose
The purpose of this policy is to:
* Provide clear guidelines for the ethical and responsible use of Generative AI tools within FPOPS.
* Protect the privacy and confidentiality of individual, staff, volunteer, and organizational data.
* Ensure the accuracy and reliability of information generated or processed using AI tools.
* Safeguard FPOPS’s intellectual property and reputation.
* Clarify FPOPS’s monitoring rights regarding AI tool usage.
* Promote efficient and effective use of approved AI technologies.
* Mitigate legal and ethical risks associated with AI adoption.
3. Scope
This policy applies to all paid employees, unpaid staff, volunteers, and board members of Family Promise of Puget Sound. It covers the use of any Generative AI tool or platform, whether FPOPS-provided or personally accessed, when used for FPOPS business or when FPOPS-related information is input into such tools.
4. Guiding Principles
* Responsible Innovation: Embrace AI tools that can enhance our work, but always with a focus on ethical implications and responsible implementation.
* Data Protection: Prioritize the privacy and security of all data, especially confidential individual information, when using AI tools.
* Accuracy and Verification: Recognize that AI-generated content may not always be accurate or unbiased; users are responsible for verifying all AI output.
* Transparency and Disclosure: Be transparent about the use of AI in our processes where it impacts individuals or external stakeholders.
* Human Oversight: AI tools are intended to assist, not replace, human judgment and decision-making.
* Compliance: Adhere to all FPOPS policies (e.g., Confidentiality, Electronic Communications, Anti-Harassment) and all applicable laws and regulations.
* Accountability: Individuals are accountable for their use of AI tools and the content they generate or input.
5. Definitions
* Generative AI (GenAI): Artificial intelligence systems capable of generating new content, such as text, images, audio, or code, in response to prompts or inputs. Examples include large language models (LLMs) and text-to-image generators.
* FPOPS-Approved AI Agent/Platform: Specific Generative AI tools or platforms that have been officially reviewed, vetted, and authorized by FPOPS leadership for use in organizational business. These platforms will have established data privacy agreements and security protocols.
* Input/Prompt: Any text, data, or information entered by a user into a Generative AI tool.
* Output/Generation: Any text, image, or other content produced by a Generative AI tool in response to an input.
* Confidential Information: As defined in FPOPS’s Confidentiality Policy (FPPS-CP-001), including individual personal data, internal operational details, financial records, personnel information, and unreleased strategic plans.
6. Policy Guidelines and Procedures
6.1. Authorized Use of Generative AI Tools
* Approved Agents Only: The use of Generative AI tools for FPOPS business is strictly limited to FPOPS-approved AI agents and platforms. A list of approved tools will be maintained and communicated by the CEO or designated IT/Operations lead.
* No Unauthorized Tools: Employees, volunteers, and board members are prohibited from inputting FPOPS-related information, confidential data, or individual-specific details into any Generative AI tool or platform that is not explicitly FPOPS-approved. This includes publicly available AI tools (e.g., free versions of ChatGPT, Bard, Midjourney) that do not offer enterprise-level data privacy agreements.
* Purpose-Driven Use: Approved AI tools should only be used for legitimate FPOPS business purposes that align with our mission and operational goals.
6.2. Data Management and Privacy
* No Confidential Data Input: Under no circumstances should any confidential individual information (e.g., names, addresses, case details, health information), sensitive employee/volunteer data, or FPOPS proprietary information be input into any Generative AI tool, even FPOPS-approved ones, unless explicitly designed and approved for secure handling of such data.
* Anonymization/De-identification: If using AI for general insights or analysis of aggregated data, ensure all data is fully anonymized or de-identified before input.
* Data Retention: Be aware that data input into AI tools, even approved ones, may be retained by the AI provider. Understand the data retention policies of FPOPS-approved tools.
* Compliance with Confidentiality Policy: All use of AI tools must strictly adhere to FPOPS’s Confidentiality Policy (FPPS-CP-001).
6.3. Accuracy and Verification of AI Output
* Human Review Required: All outputs generated by Generative AI tools must be thoroughly reviewed, fact-checked, and edited by a human user before being used, published, or shared externally.
* No Blind Trust: Users must not blindly trust AI-generated content. AI models can “hallucinate” (produce false information), reflect biases present in their training data, or generate misleading content.
* Responsibility for Output: The individual using the AI tool is ultimately responsible for the accuracy, appropriateness, and compliance of any AI-generated content they use or disseminate.
6.4. Professionalism and Intellectual Property
* Professional Conduct: All interactions with AI tools and the use of their output must align with FPOPS’s Electronic Communications Policy (FPPS-EC-001) and Social Media Policy (FPPS-SM-001).
* Intellectual Property: Be mindful of copyright and intellectual property when using AI tools. Do not input copyrighted material without permission, and understand that the ownership of AI-generated content can be complex. Do not use AI to generate content that infringes on others’ intellectual property.
* Attribution (if applicable): If AI is used to create content that is publicly shared, consider transparently disclosing the role of AI in its creation where appropriate and beneficial for clarity.
6.5. Monitoring and Compliance
* Monitoring Rights: FPOPS reserves the right to monitor the use of FPOPS-approved AI agents and platforms, as well as any data input into or generated by them, to ensure compliance with this policy and other FPOPS policies. This is consistent with FPOPS’s monitoring rights outlined in the Electronic Communications Policy (FPPS-EC-001).
* Audits: FPOPS may conduct periodic audits of AI tool usage to ensure adherence to this policy.
7. Consequences of Violations
Any violation of this Generative AI Policy, particularly unauthorized use of AI tools or the input of confidential data into unapproved platforms, may result in disciplinary action, up to and including:
* Verbal or written warning.
* Suspension of access to FPOPS systems and AI tools.
* Suspension from duties.
* Termination of employment or service.
* Reporting to law enforcement, if applicable, for severe data breaches or illegal activities.
The level of disciplinary action will depend on the nature and severity of the violation, the potential harm caused, and any prior incidents.
8. Responsibilities
* All Staff, Volunteers, and Board Members: Responsible for understanding and strictly adhering to this policy, using only FPOPS-approved AI tools, and verifying all AI output.
* CEO/IT/Operations Lead: Responsible for identifying, vetting, and approving Generative AI tools for FPOPS use, maintaining a list of approved tools, and providing guidance and training.
* Supervisors: Responsible for communicating this policy to their teams, ensuring appropriate use of AI tools, and addressing any observed violations.
* Data Privacy Officer (if applicable): Responsible for advising on data privacy and security implications of AI tool usage.
9. Policy Review and Revision
This policy will be reviewed annually by the CEO and Board of Directors, or more frequently as needed, to ensure its continued effectiveness and alignment with rapid advancements in AI technology, evolving legal and ethical landscapes, and the changing needs of Family Promise of Puget Sound. Any revisions will be communicated to all relevant personnel.