Related Policies and Considerations


Ethics, Honesty, Accuracy, and Accountability

AI-generated content can be inaccurate, biased, or entirely fabricated (sometimes referred to as “hallucinations”). Using AI at FIT requires integrity, accuracy, and accountability. AI must not be used to mislead, impersonate others, or produce biased, discriminatory, or harmful content. All AI-assisted material must be carefully reviewed for accuracy and ethical implications; relying solely on AI is not an excuse for errors or misrepresentations. Ultimately, the AI user is fully responsible for the output.

  • AI-assisted content cannot be presented as original work without appropriate acknowledgment or citation.
  • AI-assisted content is expected to be reviewed for accuracy before use. Factual information that cannot be verified with a reliable source should not be used.
  • Users are expected to disclose when AI-assisted content is used in official FIT communications, reports, or publications.
  • AI should not be used to impersonate any individual, including FIT employees, students, or external partners.
  • Users are fully responsible for AI-assisted content and must ensure it aligns with FIT’s mission, values, and policies.

Data Privacy and Information Security

Protecting data privacy and information security is critical when using AI. Users must never input non-public information, including confidential information, personally identifiable information (PII), or FIT’s financial data, into AI tools. AI-generated content can also be used in phishing or social engineering attempts, so vigilance and prompt reporting of suspicious activity are essential.

  • AI tools may be used with publicly available data or information classified as Public Information under FIT’s Information Security policy. All usage must comply with this policy and FIT’s Acceptable Use policy.
  • FIT data, including confidential, proprietary, or personally identifiable information (PII), may not be entered, uploaded, or submitted to any AI tool.
  • AI tools must not be used to process or store legally protected information, including but not limited to student records (FERPA), employee records, or financial data.
  • Users must respect intellectual property rights. It is the responsibility of individual users to ensure that both their inputs to and outputs from AI tools comply with copyright and patent law and data privacy regulations, and do not expose information that could enable identity theft.
  • Treat email content as private. Copying emails or internal messages into AI tools can expose sensitive information such as PII, student records, or confidential communications.
  • Users should report any suspicious AI activity, such as AI-generated phishing attempts, misleading content, or other security concerns, to IT immediately.
  • Under no circumstances should users connect AI tools to platforms that contain sensitive information. 
  • Users should not share sensitive, confidential, strategic, or internal FIT information with AI tools, since many AI tools’ terms of service grant the vendor a license to any content that is entered.

Bias Awareness

AI-generated content may reflect or reinforce biases present in its training data. It’s important to critically assess outputs for stereotypes, exclusionary language, or assumptions that could impact fairness, equity, or representation. Promoting responsible AI use means staying aware of how content may affect diverse audiences and taking steps to ensure outputs are inclusive, respectful, and free from bias.

  • AI technologies may not be used to create content that is inappropriate, discriminatory, misleading, or otherwise harmful.
  • Users must critically evaluate AI-generated content for potential bias, errors, or misleading information before acting on it or putting it into use. The AI user remains fully responsible for the output, including any unintended bias, and must ensure that biased output is not used on FIT’s behalf.
  • Faculty are strongly discouraged from using AI tools to assess, grade, or evaluate student work. Relying on AI for these purposes can introduce bias, compromise academic integrity, and undermine the trust that is essential to the student-faculty relationship. All evaluations should be conducted by qualified individuals who can fairly and contextually assess the quality and intent of student submissions.
  • AI must not be used to make hiring, admissions, or personnel decisions, as these require human judgment and compliance with anti-discrimination laws.

Intellectual Property

Safeguarding intellectual property is critical when using AI tools. Users must avoid using AI to generate or replicate content that infringes on copyrights, trademarks, or proprietary ideas. Additionally, sharing FIT’s unpublished or confidential work with AI tools may expose it to misuse under those tools’ terms of service. Always document your creative process and retain records when AI is involved to help establish ownership and protect original work. Once you share something, you have little to no control over where it goes or how it is used. When in doubt, keep it out.

  • Do not input unpublished work, proprietary ideas, or confidential projects into AI tools. Consider registering your original work before using AI tools to refine or expand on it.
  • If you must use AI for brainstorming, provide only general prompts rather than specific, original content.
  • Review whether the AI tool claims ownership or usage rights over what you enter. Note that some jurisdictions may not grant copyright protection to AI-created work.
  • Ensure the tool does not store, retain, or use your inputs to inform its future models.
  • Use AI-generated content only as a reference; avoid copying and pasting it directly.
  • Store your original work in a secure, private location rather than on an AI platform.
  • Use watermarks, copyright notices, or metadata to establish ownership and keep dated records, drafts, and backups of your work to prove originality.
  • Periodically check if your work appears in AI-generated content by searching for key phrases or using plagiarism detection tools. If you find unauthorized use, take action through copyright claims or legal channels. 

Content created by generative AI tools, such as text, images, or music, cannot currently be copyrighted because U.S. law requires a human author. While the rules may evolve, it is important to know that writing a specific prompt for an AI tool does not make you the copyright holder of the output it produces. Also keep in mind that many AI tools are trained on copyrighted material, and the legal questions around that use are still being debated. Always think critically about ownership and usage rights when working with AI-generated content.

Record Retention

AI-generated content must adhere to FIT’s Records Retention and Disposition policy and the retention schedule of the area where the content is created and used. AI tools can generate quick answers but should not be used to determine how long records are kept or when they are disposed of.

Records management requires human oversight. Always refer to the official records retention schedules and consult FIT’s Records Management Officer to ensure compliance. Retention requirements are governed by legal, regulatory, and college policies that AI may not interpret correctly.

All records must be approved for destruction before they are destroyed, per FIT’s Records Retention and Disposition policy.