The Ithacan

AI policy

Generative artificial intelligence refers to systems, including large language models, that create something new, such as text, images, graphics and interactive media. These terms will be referenced throughout this policy:

Generative AI: A type of artificial intelligence that creates new content, such as text, images, or media, by interpreting and generating based on input data.

Large language models (LLMs): AI systems trained on vast datasets of text to understand and generate human-like language; they are the information backbone that powers generative AI.

AI prompt: A specific input or instruction provided to an AI tool to generate a desired output.

Hallucination: The phenomenon where AI generates information or responses that are fabricated, inaccurate, or not grounded in fact.

Training data: The dataset — articles, research papers or social media posts — used to teach an AI model patterns, relationships and knowledge for making predictions or generating content.

Although generative AI has the potential to improve newsgathering, it also has the potential to harm journalists’ credibility and our unique relationship with our audience.

As we proceed, the following five core values will guide our work. These principles apply to the newsroom and to non-editorial departments, including advertising and development.

Transparency

When we use generative AI in a significant way in our journalism, we will document the tools with specificity and describe them to our audience in a way that both discloses and educates. This disclosure may be a short tagline, a caption or credit, or, for something more substantial, an editor’s note. When appropriate, we will include the prompts that were fed into the model to generate the material.

Accuracy and human oversight

All information generated by AI requires human verification. Everything we publish will live up to our long-standing standards of verification. For example, an editor will review prompts and any other inputs used to generate substantial content, including data analysis, in addition to the editing process already in place for all of our content.

We will actively monitor and address biases in content that has been informed in any way by AI, ensuring fairness and equity in our journalism. The student media adviser, editor-in-chief and managing editor will regularly evaluate and update our standards to ensure uses and tools are equitable and minimize bias.

As a rule, The Ithacan currently uses AI tools only to assist in reporting and editing, such as transcription services. Content WILL NOT be generated by AI.

Privacy and security 

Our relationship with our audience is rooted in trust and respect. To that end, we will protect our audience’s data in accordance with our newsroom’s privacy policies, as outlined in our handbook and policy manual: “As we utilize AI to customize content or develop products that alter the way content is delivered, we will protect our audience’s data in accordance with our newsroom’s privacy policies. Most privacy policies forbid entering sensitive or identifying information about users, sources or even our own staff into any generative AI tools.”

We will never enter sensitive or identifying information about our audience members, sources or our own staff into any generative AI tools.

As technology advances and opportunities to customize content for our audience arise, we will be explicit, in accordance with our organization’s privacy policy, about how your data is collected and how it is used to personalize your experience.

We will disclose any editorial content that has been created and distributed based on that personalization.

Accountability 

We take responsibility for all content informed by AI tools. Any errors or inaccuracies resulting from the use of these tools will be transparently addressed and corrected. We will regularly audit feedback forms and incorporate audience feedback into policy updates. Violations of this policy will require retraining and may result in disciplinary action.

Exploration

With the five previous principles as our foundation, we will embrace exploration and experimentation. We will strive to invest in newsroom training so every staff member is knowledgeable about the responsible and ethical use of generative AI tools.

Logistics

The point people on generative AI in our newsroom are the student media adviser, editor-in-chief and managing editor. Coordinate all use of AI with them. This team will also distribute frequent interim guidance throughout our organization.

The team will seek input from a variety of roles, particularly those who are directly reporting the news.

You should expect at least monthly communication from this team, with updates on what we are doing and guidance on which activities are generally approved.

In addition, members of this team will:

  • Monitor content management systems, photo editing software and business software for updates that may include AI tools. Because software changes quickly and AI is being added to nearly every technology product, appropriate team members will be delegated to stay current on these updates.
  • Write clear guidance about how AI will or will not be used in content generation: AI WILL NOT be used to create content.
  • Edit and finalize our AI policy and ensure that it is both internally available and, where appropriate, publicly available (with our other standards and ethics guidelines).
  • Seek input from our audience through surveys, focus groups and other feedback mechanisms.
  • Manage all disclosures about partnerships, grant funding or licensing from AI companies.
  • Understand our privacy policies and explain how they apply to AI and other product development. 
  • Innovate ways to communicate with the audience to both educate them and gather data about their needs and concerns.
  • Outline a clear process on how the policy will be updated.

All uses of AI should start with journalism-centered intentions and be cleared by the appropriate point people. Human verification and supervision are essential. Here’s the form you should use when requesting to use AI in your work for The Ithacan. Additionally, the section editors involved should be informed of the use.

Editorial use

Approved generative AI tools

Here are the generative AI tools currently approved and encouraged for use at The Ithacan; we will expand this list as we learn more. Please reach out to the editor-in-chief with any new tools you’d like to start using, and we can update the list pending review:

  • YESEO
  • Stylebot
  • WhisperAI (this implementation is encouraged for Mac users)
  • Otter.ai (paid version is preferred, particularly for sensitive interviews)
  • Grammarly (only to fix minor grammar, style and syntax errors; this SHOULD NOT be used to fully rewrite sentences)
  • Source Diversity plug-in
  • Existing tools that we use (Zoom, Canva, Photoshop, etc.) that have added AI capabilities

Entering our content: Do NOT enter The Ithacan’s content into any large language models, with the exception of YESEO.

We encourage the use of generative AI to improve efficiency and automate routine tasks. In upholding the five principles of AI use in our organization, these caveats apply:

  • Preserve our editorial voice: We will be cautious when using AI tools to edit content, ensuring that any changes maintain The Ithacan’s editorial voice and style guidelines.
  • Avoid full writes and rewrites: Generative AI tools will not be used for wholesale writing or rewriting of content. We will use them for specific edits rather than rewriting entire paragraphs or articles.
  • Proprietary content: We will not input any private or proprietary information, such as contracts, email lists or sensitive correspondence, into generative AI tools.
  • Verification: We will be mindful that generative AI tools may introduce errors, misinterpret context or suggest phrasing that unintentionally changes meaning, and will review all suggestions critically to ensure accuracy.
  • Disclosure: In most cases, we will disclose the use of generative AI. Our goal is to be specific and highlight why we’re using the tool to better engage with readers.

Research

We may use generative AI to research a topic. This includes using chatbots to summarize academic papers and suggest others, surface historical information or data about the topic and suggest story angles. Generative AI tools may be used by fact-checkers to find checkable claims to pursue, or by journalists to sift through social media posts for article topics. A reminder: these tools are prone to factual errors, so all outputs will be verified by reporters, editors, copy editors and proofreaders.

Generative AI tools may also be used to assist in formulating interview questions and developing story pitches, but this should never be the first step in the process. As with all other uses of AI, the human element is key in verifying outputs.

Transcription

We may use generative AI to transcribe interviews and make our reporting more efficient. Our journalists will review transcriptions and cross-check with recordings for any material to be used in articles or other content.
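
To make this concrete, here is a minimal sketch of what a transcription pass might look like with the open-source Whisper model; the file name and model size are placeholders, not newsroom standards.

```python
# A minimal transcription sketch using the open-source openai-whisper package.
# The audio file name and model size are placeholders, not newsroom standards.
import whisper

model = whisper.load_model("base")          # smaller models are faster, less accurate
result = model.transcribe("interview.mp3")  # hypothetical recording

# Print the transcript for review; per this policy, a journalist must
# cross-check this text against the recording before quoting from it.
print(result["text"])
```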

Translation

We may use generative AI tools to translate material for article research. We may also use those tools to translate article content to reach new audiences; such translations will always be reviewed by an expert in the language and include the following disclosure: “This article/audio/video was translated using generative AI to reach new audiences. It has been reviewed by our editorial team to ensure accuracy. Read more about how and why we use AI in our reporting. Send feedback.”

Searching and assembling data

We may use AI to search for information, mine public databases or assemble and calculate statistics that would be useful to our reporting and in the service of our audience. Any data analysis or code used on the website will be checked by an editor with relevant data skills.
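
As an illustration, here is a hedged sketch of the kind of small, reproducible analysis this implies; the file name and column names are hypothetical, and the point is that an editor can re-run every number we publish.

```python
# A sketch of a small, checkable data analysis. The CSV path and column
# names are hypothetical; the goal is an analysis an editor with relevant
# data skills can re-run and verify line by line.
import pandas as pd

df = pd.read_csv("enrollment_by_year.csv")  # hypothetical public dataset

# Compute year-over-year percent change in enrollment.
df = df.sort_values("year")
df["pct_change"] = df["enrollment"].pct_change() * 100

print(df[["year", "enrollment", "pct_change"]].to_string(index=False))
```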

Headlines or search engine optimization

Our journalists and editors may use generative AI tools to generate headlines to help our content appear more prominently in search engines. We will put enough facts into the prompt that the headline is based on our journalism and not other reporting. Our preferred tool for this is YESEO, which uses GPT.
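
YESEO’s internals are not documented here, so as an illustration of the principle of grounding the prompt in our own reporting, here is a hedged sketch using the OpenAI chat API; the model name, prompt wording and example facts are invented for illustration and do not reflect YESEO’s implementation.

```python
# Illustration only: NOT YESEO's implementation. It shows the principle of
# putting the facts of our own reporting into the prompt so the headline is
# grounded in our journalism. Model name and wording are assumptions.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

facts = (  # invented example facts for illustration
    "Ithaca College's student government voted Tuesday to expand "
    "late-night shuttle service starting next semester."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {"role": "system",
         "content": "Suggest three SEO-friendly news headlines using only the facts provided."},
        {"role": "user", "content": facts},
    ],
)
print(response.choices[0].message.content)  # an editor still selects and verifies
```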

Copyediting

Generative AI may be used as a tool to assist with copyediting tasks, such as identifying grammar issues, suggesting style improvements or rephrasing sentences for clarity.
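
As one illustration of the review pattern we expect, here is a sketch using the open-source language_tool_python wrapper around LanguageTool. LanguageTool is rule-based rather than generative, but the workflow is the same: the tool surfaces suggestions, and a human copy editor decides.

```python
# A copyediting-assist sketch using the open-source language_tool_python
# wrapper around LanguageTool (rule-based, not generative). It surfaces
# issues for a human copy editor; it does not rewrite the text.
import language_tool_python

tool = language_tool_python.LanguageTool("en-US")
draft = "The committee have went over it's findings yesterday."  # deliberately flawed example

for match in tool.check(draft):
    # Each match names the issue and offers replacements; the editor decides.
    print(f"{match.ruleId}: {match.message} -> {match.replacements[:3]}")
```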

Visuals

The Ithacan holds AI-generated visuals to the same rigorous ethical standards as all forms of journalism. Because images shape perception instantly and powerfully, our use of generative AI in visual storytelling is governed by principles of truth, transparency and audience trust. With that said, The Ithacan will not use AI tools to generate visuals. However, The Ithacan may publish user-generated AI visuals when they are a critical component of the story. For example, if reporting on a class using AI tools to generate images, The Ithacan may use those courtesy/handout images to aid in the delivery of the story. Within the newsroom, these tools will only be used for editing.

These guidelines apply to all AI-assisted visual materials, including illustrations, composites, animations and enhanced photographs. Every visual must serve a clear editorial purpose and uphold our responsibility to inform, not mislead.

Accuracy over aesthetics

AI photo enhancement tools (e.g., sharpening, lighting correction, denoising) must reflect reality, not dramatize or distort it. Follow AP’s guidelines. Edits that exaggerate emotion, alter mood or misrepresent the scene violate visual ethics. For example, deepening shadows to heighten drama in disaster imagery is not permitted. All enhancements must be disclosed internally and reviewed against the original.

Review and verification

Given the rise of AI generation tools for the public, editors and journalists must be vigilant about analyzing reader-submitted content. Media verification must rely on multiple methods — metadata checks, source verification, AI-assisted forensics — and never on one tool. Verification decisions must be documented internally for future review and accountability.
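
As one example of a metadata check, the sketch below reads EXIF data from a submitted image using the Pillow library; the file name is hypothetical, and missing or inconsistent metadata is only a signal for further investigation, never proof on its own.

```python
# A basic metadata check on a reader-submitted image using Pillow.
# Missing or inconsistent EXIF data is a signal to investigate further,
# never proof on its own; verification must combine multiple methods.
from PIL import Image
from PIL.ExifTags import TAGS

img = Image.open("submitted_photo.jpg")  # hypothetical submission
exif = img.getexif()

if not exif:
    print("No EXIF metadata found (common in AI-generated or stripped images).")
for tag_id, value in exif.items():
    print(f"{TAGS.get(tag_id, tag_id)}: {value}")
```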

No manipulation of real people or events

We do not use AI to alter depictions of real people or places unless clearly disclosed and editorially justified. This includes recreating faces, changing expressions, or adding or removing individuals from scenes. We WILL NOT use AI to simulate likenesses of staff or sources in news reporting.

Disclosure

AI-generated illustrations or composites provided by audience members or sources must be clearly labeled. Captions should disclose the method and source of generation. In addition to this disclosure, the use should include this note: This illustration has been reviewed by our editorial team for accuracy. Read more about how we use AI in our reporting.

Product development

While The Ithacan does not currently have a formal product development team, members of the editorial board are involved in product decisions. In the event that The Ithacan does start developing internal tools, we will use the guiding principles below in shaping those tools.

The Ithacan recognizes that AI-driven personalization and product tools shape how audiences discover, understand and engage with journalism. We treat these systems with the same ethical rigor we apply to our reporting: prioritizing transparency, fairness, human oversight and audience trust.

This section applies to all AI used in product design, including chatbots, recommendation engines, search assistants and personalization algorithms. All tools must serve the public interest — not just engagement metrics.

Human-in-the-loop

AI tools will be reviewed during the development process by editors and product leaders. The editor-in-chief always has final approval.

Inclusive design and bias mitigation

All AI product tools must be tested for differential performance and exposure across topics and audience segments. The community outreach manager is responsible for testing and ensuring that underserved communities are not being misrepresented. The Ithacan also may consult the Board of Publications for guidance.
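
As a sketch of what such testing could look like, the example below computes each audience segment’s exposure to each topic from hypothetical recommendation logs; the log format and column names are assumptions, and large gaps would be flagged for human review.

```python
# A differential-exposure check on hypothetical recommendation logs with
# 'segment' and 'topic' columns. The question: does the recommender surface
# some topics far less often for some audience segments?
import pandas as pd

logs = pd.read_csv("recommendation_logs.csv")  # hypothetical export

counts = logs.groupby(["segment", "topic"]).size()
shares = counts / counts.groupby(level="segment").transform("sum")

# One row per segment, one column per topic; large gaps get human review.
print(shares.unstack("topic").round(3))
```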

Audience education

AI tools must help audience members understand complexity, not just consume more content.

Clear off-ramps and fallback options

Every AI-powered feature must include a way for users to access full content or request human help.

Comment moderation requires transparency

AI comment tools, if used, will follow the newsroom’s code of conduct and moderation guidelines. Our comment policy is outlined in our community guide.

Privacy is non-negotiable

Any data used for personalization must comply with The Ithacan’s privacy policy.

Open development and accountability

All AI product features will be documented with:

  • A named point person
  • Development goals
  • Stakeholder consultation (journalists, technologists, ethics leads, audience members)
  • Correction and error response process

Public service over product novelty

AI tools must meet a specific user need that aligns with our mission. Any proposed tool will be reviewed by the editorial board, with the editor-in-chief having final say in implementation.

Ongoing training

We will do our best to implement regular training on AI tools. This training will be delivered or facilitated by the student media adviser and outside experts.

Environmental impact

The Ithacan acknowledges the energy demands associated with training and deploying large-scale AI systems. As part of our commitment to sustainable journalism, we recognize that responsible AI use includes minimizing our environmental footprint.

We commit to:

  • Prioritizing efficient tools
  • Advocating transparency from vendors and AI companies
  • Offsetting responsibly

Commitment to audience AI literacy

Along with this AI policy, we plan to develop an AI literacy page to help our audience understand how and why we’re using generative AI. This material will be regularly updated to reflect our most current experimentation. As our language evolves, we will be better able to describe specific AI applications and tools. On the page, we will link to resources, articles and other materials to:

  • Help our audience understand the basics of generative AI.
  • Explain why newsrooms use AI in their work.
  • Build a more robust vocabulary for describing AI.
  • Avoid AI-generated misinformation.
  • Use chatbots responsibly to seek out factual information.
  • And create responsibly using new generative AI tools.

Last updated Fall 2025
