The Student News Site of Ithaca College

The Ithacan

AI policy

Generative artificial intelligence is the use of large language models to create something new, such as text, images, graphics and interactive media. Although generative AI has the potential to improve newsgathering, it also has the potential to harm journalists’ credibility and our unique relationship with our audience.

As we proceed, the following five core values will guide our work. These principles apply to the newsroom and to non-editorial departments, including advertising.

Transparency – both internal and external. 

Externally, as we use AI in our journalism, we will document and describe the tools with specificity. When AI tools influence audience-facing content, we will tell the audience in ways that both disclose and educate news consumers. We will work with editors and designers to create disclosures that are precise in language without being onerous to our audience. This may be a short tagline, a caption or credit, or for something more substantial, an editor’s note. When appropriate, we will include the prompts that are fed into the model to generate the material.

Our transparency works on multiple levels. Internally, it facilitates conversation and creativity. It will be clear to our peers whenever we are using generative AI. This will facilitate collective learning and help us create applicable, adaptable policies as the technologies evolve.

Externally, communication and disclosure ideally create opportunities to get feedback from the audience, as well as educate consumers. As journalists, part of our job is to empower the audience with news literacy skills. AI literacy – understanding how generative AI works, what benefits it brings to the information ecosystem and how to avoid AI-generated misinformation – is a subset of news literacy. 

Accuracy and human verification – All information generated by AI requires human verification. Everything we publish will live up to our standards of verification. Increasingly, in all of our work, it is important to be explicit about how we know facts are facts; this is particularly important when using AI. For example, an editor should review prompts and any other inputs used to generate a story or other material, and everything should be replicable.

Audience service – Our work in AI should be guided by what will be useful to our audience as we serve them. We have made a promise to our audience to provide them with information that informs and educates by bringing insightful and purposeful community journalism to the Ithaca College community and broader online audience.

Privacy and security  – Our relationship with our audience is rooted in trust and respect. To that end, as we utilize AI to customize content or develop products that alter the way content is delivered, we will protect our audience’s data in accordance with our newsroom’s privacy policies. Most privacy policies forbid entering sensitive or identifying information about users, sources or even our own staff into any generative AI tools. 

As technology advances and opportunities to customize content for users arise, we will be explicit about how your data is collected — in accordance with our organization’s privacy policy — and how it is used to personalize your experience.

Therefore, we will disclose any editorial content that has been created and distributed based on that personalization.

Exploration – With the four previous principles as our foundation, we will embrace exploration and experimentation. We should strive to invest in newsroom training — internal or external — so every staff member is knowledgeable in generative AI tools.

Logistics

The point person on generative AI in our newsroom is the student media adviser, in consultation with the editor-in-chief and managing editor. Coordinate all use of AI with them. They will also be the source of frequent interim guidance distributed throughout our organization. 

The leadership team will seek input from a variety of roles, particularly those who are directly reporting the news.

You should expect to hear regular communication from this team with updates on what we are doing and guidance on what activities are generally approved.

In addition, members of this team will:

  • Monitor our content management systems, photo-editing software and business software for updates that may include AI tools. Because software changes quickly and AI is being added to nearly every technology product, it’s important to delegate appropriate team members to stay informed about updates.
  • Write clear guidance about how we will or will not use AI in content generation.
  • Edit and finalize our AI policy and ensure that it is both internally available and, where appropriate, publicly available (alongside our other standards and ethics guidelines).
  • Seek input from our audience, through surveys, focus groups and other feedback mechanisms.
  • Manage all disclosures about partnerships, grant funding or licensing from AI companies, as necessary.
  • Understand our privacy policies and explain how they apply to AI and other product development. This includes regularly consulting with editors, lawyers or other privacy experts that influence newsroom policies. 
  • Innovate ways to communicate with the audience to both educate them and gather data about their needs and concerns.
  • Outline a clear process on how the policy will be updated.

Current editorial use:

All uses of AI should start with journalism-centered intentions and be cleared by the editor-in-chief. Human verification and supervision are key. Here’s the form you should use.

Tools to use, tools to avoid

This list will be updated periodically to include all of the AI tools we’re aware of.

Here are the generative AI sources we encourage you to use. We will expand this list as we learn more:

Be mindful when using the following service(s):

  • Otter.ai (free version): There are privacy concerns with this, as this Verge report addresses, based on this longer first-hand account by a Politico journalist. Note: This is particularly discouraged for sensitive interviews.

Entering our content: Do NOT enter The Ithacan’s content into any large language models.

Editorial use:

Generative AI is generally permitted for the following purposes (but please still fill out the form so we know what’s going on).

Research – It’s fine to ask a publicly available large language model to research a topic. However, you’ll want to independently verify every fact, so be wary: it is fairly common for AI to “hallucinate” information, including facts, biographical information and even newspaper citations.

Headline experimentation – Asking AI to generate headlines is a form of research. The same caveats apply. Also, be sure to put enough facts into the prompt that the headline is based on our journalism and not other reporting. Headlines generated by an AI tool can be used, but editors should ensure that the headline is the best quality from an editorial and SEO standpoint.

Summary paragraphs – Do NOT use AI to generate article summaries that appear on the homepage, i.e., the custom teaser. Our policy is that we do not enter our content into any large language models.

Searching and assembling data – You are permitted to use AI to search for information, mine public databases or assemble and calculate statistics that would be useful to our audience. Any data analysis should be checked by an editor.

Visuals – Do NOT use generative AI services to create illustrations for publication, unless there is a valid reason based on the content and the editor-in-chief approves the use. All such illustrations must carry a clear credit stating that the visual was created using AI.

Do not use AI to manipulate photos unless they are for illustration purposes and clearly identified as such. Visual journalists need to be aware of software updates to photo-processing tools to ensure AI enhancement is being used according to our policies. Do not publish any reader-submitted content without first verifying its authenticity.

Fact-checking

  • Use of AI alone is not sufficient for independent fact-checking. Facts should be checked against multiple authoritative sources that have been created, edited or curated by human beings; a single source is generally not sufficient.
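The rule above — AI output alone never verifies a fact, and neither does a single source — can be sketched as a simple editorial check. The function name and the two-source threshold below are illustrative assumptions, not a tool this policy mandates:

```python
def verified(claim: str, human_sources: list[str], min_sources: int = 2) -> bool:
    """Return True only if a claim is backed by at least `min_sources`
    independent, human-created or human-curated sources.

    AI output never counts toward verification under this policy,
    so it should never appear in `human_sources`.
    """
    independent = set(human_sources)  # crude de-duplication of repeated sources
    return len(independent) >= min_sources
```

Even a checklist this simple makes the standard concrete: the same source cited twice still counts once, and an empty source list always fails.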

Social media use

Use of verbatim GPT content is NOT permitted on our social channels.

  • Audience teams should do regular content audits to ensure social copy and posts meet ethical guidelines.

Privacy and security

  • No personal information from our staff or audience should be entered into generative AI programs.
  • None of our intellectual property should be entered into such a program.
  • Staff working with AI tools should have a clear understanding of The Ithacan’s privacy policy.
  • When using generative AI to customize content for audience subsets, AI disclosure should include how user data was used to do so.
  • Wherever possible, we should run the LLMs behind generative AI tools locally, meaning on hardware we own, via tools like AnythingLLM or GPT4All.
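For illustration only, a crude pre-submission scrub of obvious identifiers could look like the following. These regex patterns are assumptions: they will miss many kinds of identifying information and are no substitute for the human review and the privacy policy this section requires.

```python
import re


def scrub_identifiers(text: str) -> str:
    """Remove obvious personal identifiers (emails, US-style phone
    numbers) before text goes anywhere near an external AI tool.

    Illustrative only: these patterns are far from exhaustive and
    must not replace human review.
    """
    text = re.sub(r"[\w.+-]+@[\w-]+\.[\w.]+", "[email removed]", text)
    text = re.sub(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b", "[phone removed]", text)
    return text
```

A scrub step like this is a backstop, not a license: the policy's baseline remains that sensitive material should not be entered into external tools at all.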

Ongoing training

Regular training on AI tools and experiments will be available and at times even mandatory. This training will be delivered by a combination of members from The Ithacan and outside experts.

Creating custom GPTs 

All custom GPTs must be approved by the editor-in-chief. Know that the systems you develop with ChatGPT’s custom GPT program will not be private. Any custom GPT code should be publicly available. 

Use reliable sources to train custom GPTs. One of the best ways to create solid and useful output is to limit and control the sources a custom GPT draws on to material we can vouch for. In many cases this will mean limiting our custom GPTs to our own material.

Guidelines for web director and product-related team members

Our web director and product-centered team members are committed to understanding and staying up to date with all tools, software or companies we use or partner with. We will:

  • Vet third-party vendors and their usage policies before testing any AI product.
  • Make sure any product we use adheres to our own data and privacy policies.
  • Perform comprehensive testing on all software and tools for reliability and accuracy before using them for any consumer-facing content.
  • Ensure all software settings are correct, and in accordance with our policies, before using any LLM.
  • Keep up-to-date on the latest software updates for products we use.
  • Provide best practices, documentation or training for new tools to internal users.

Some helpful AI in journalism resources:
