Love Haight AI Policy

Last updated: January 10, 2026

Generative artificial intelligence refers to AI systems, including large language models, that create new content such as text, images, graphics and interactive media. These terms will be referenced throughout this policy:

Generative AI: A type of artificial intelligence that creates new content, such as text, images or media, based on patterns learned from input data.

Large language models (LLMs): AI systems trained on vast datasets of text to understand and generate human-like language; they are the informational backbone that powers generative AI.

AI prompt: A specific input or instruction provided to an AI tool to generate a desired output.

Hallucination: The phenomenon where AI generates information or responses that are fabricated, inaccurate, or not grounded in fact.

Training data: The dataset—articles, research papers or social media posts—used to teach an AI model patterns, relationships and knowledge for making predictions or generating content.

Although generative AI has the potential to improve newsgathering, it also has the potential to harm journalists’ credibility and our unique relationship with our audience. As a hyperlocal outlet covering the Haight-Ashbury, Western Addition, and Cole Valley neighborhoods, our readers trust us because we’re of this community. That trust is sacred.

As we proceed, the following five core values will guide our work. These principles apply in the newsroom and across other departments, including advertising, events, marketing and development.

Transparency

When we use generative AI in a significant way in our journalism, we will disclose it to our audience, describing the specific tools we used in a way that both informs and educates. This may be a short tagline, a caption or credit, or for something more substantial, an editor’s note. When appropriate, we will include the prompts that were fed into the model to generate the material.

Accuracy and Human Oversight

All information generated by AI requires human verification. Everything we publish will live up to our longstanding standards of verification. In addition to the editing process in place for all of our content, an editor will review prompts and any other inputs used to generate substantial content, including data analysis.

We will actively monitor and address biases in AI-generated content, ensuring fairness and equity in our journalism. Our AI committee will regularly evaluate and update our standards to ensure uses and tools are equitable and minimize bias.

Privacy and Security

Our relationship with our audience is rooted in trust and respect. To that end, we will protect our audience’s data in accordance with our privacy policies. We will never enter sensitive or identifying information about our audience members, sources or our own staff into any generative AI tools.

As technology advances and opportunities to customize content for our audience arise, we will be explicit about how your data is collected and how it was used to personalize your experience. We will disclose any editorial content that has been created and distributed based on that personalization.

Accountability

We take responsibility for all content generated or informed by AI tools. Any errors or inaccuracies resulting from the use of these tools will be transparently addressed and corrected. We will regularly audit feedback forms and incorporate audience feedback into policy updates. Violations of this policy will result in retraining and may lead to disciplinary action.

Exploration

With the previous principles as our foundation, we will embrace exploration and experimentation. Love Haight believes AI tools can help a small newsroom punch above its weight—covering more neighborhood meetings, translating stories for our multilingual community, and creating compelling visuals on a budget. We will invest in newsroom training so every staff member is knowledgeable about the responsible and ethical use of generative AI tools.

Logistics

The point person on generative AI at Love Haight is Jordan Reyes (Managing Editor), supported by AI committee members: Maya Chen (Editor-in-Chief), Devon Park (Audience Engagement Lead), and Luis Morales (Senior Reporter). Coordinate AI experimentation through Jordan via Slack (#ai-experiments) or in person.

The team will seek input from a variety of roles, particularly those directly reporting the news.

You should expect at least monthly communication from this team, with updates on what we are doing and guidance on which activities are generally approved.

In addition, members of this team will:

  • Monitor our content management systems, photo editing software and business software for updates that may include AI tools
  • Write clear guidance about how we will or will not use AI in content generation
  • Edit and finalize our AI policy and ensure it is publicly available on lovehaight.news/ai-policy
  • Seek input from our audience through surveys, focus groups and other feedback mechanisms
  • Manage all disclosures about partnerships, grant funding or licensing from AI companies
  • Understand our privacy policies and explain how they apply to AI and product development
  • Innovate ways to communicate with the audience to both educate them and gather data about their needs and concerns

All uses of AI should start with journalism-centered intentions and be cleared by the AI committee. Human verification and supervision are essential. Before starting a new AI experiment, post in #ai-experiments with:

  • How do you want to use AI?
  • What is the journalistic purpose of this work?
  • How will you fact-check the results?
  • Will any material be published?
  • Which journalists will be responsible for overseeing this work?
  • What are the risks (hallucinations, copyright issues, privacy violations)?
  • What safety nets can you devise to intervene before negative outcomes?

Editorial Use

Approved Generative AI Tools

Here is a list of tools currently approved for use at Love Haight. Reach out to Jordan with any new tools you’d like to start using, and we can update the list pending review.

Text & Research:

  • ChatGPT
  • Claude
  • Perplexity
  • NotebookLM
  • Google Pinpoint

Audio/Transcription:

  • Otter.ai
  • Fireflies.ai
  • ElevenLabs (for approved voice cloning—see below)

Visual:

  • Adobe Firefly
  • Midjourney
  • DALL-E 3
  • Canva Magic Studio

Productivity:

  • Apple Intelligence
  • Existing tools (Zoom, Google Docs, etc.) that have added AI capabilities

Entering our content: You may enter Love Haight content into approved LLMs for research, summarization, and editing assistance. Do not enter unpublished scoops or sensitive source information.

We encourage the use of generative AI to improve efficiency and automate routine tasks. In upholding our five principles, these caveats apply:

  • Preserve our editorial voice: We will be cautious when using AI tools to edit content, ensuring that any changes maintain Love Haight’s neighborhood-focused, conversational voice
  • Avoid full writes and rewrites: Generative AI tools will not be used for wholesale writing or rewriting of content. We will use them for specific edits rather than rewriting entire paragraphs or articles
  • Proprietary content: We will not input any private or proprietary information, such as contracts, email lists or sensitive correspondence into generative AI tools
  • Verification: We will be mindful that generative AI tools may introduce errors, misinterpret context or suggest phrasing that unintentionally changes meaning, and will review all suggestions critically to ensure accuracy
  • Disclosure: In most cases, we will disclose the use of generative AI (see exceptions below)

Research

We may use generative AI to research a topic. This includes using chatbots to summarize academic papers, surface historical information about the Haight (our neighborhood has a lot of it), find city planning documents, and suggest story angles. A reminder: These tools are prone to factual errors, so all outputs will be verified by reporters and editors.

Transcription

We may use generative AI to transcribe interviews and neighborhood meetings, making our reporting more efficient. Our journalists will review transcriptions and cross-check with recordings for any material to be used in articles or other content.

Translation

San Francisco is a multilingual city. We may use generative AI tools to translate material for article research. We may also use those tools to translate article content to reach new audiences in Spanish, Cantonese, and other languages spoken in our coverage area. Translations will always be reviewed by a fluent speaker and include the following disclosure:

This article was translated using generative AI to reach our multilingual community. It has been reviewed for accuracy. Read our AI policy. Send feedback.

Searching and Assembling Data

We may use AI to search for information, mine public databases (SF Open Data, city planning records, police reports) or assemble and calculate statistics useful to our reporting. Any data analysis or code written with AI assistance will be checked by an editor with relevant data skills.
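To make that review concrete, here is a minimal sketch of the kind of AI-assisted analysis an editor might be asked to check, written with only Python’s standard library. The 311-style sample records are invented for illustration; real work would pull records from a public dataset such as SF Open Data.

```python
# Hypothetical sketch: summarizing neighborhood service-request records.
# The sample data below is invented for illustration only.
from collections import Counter
from statistics import median

records = [
    {"category": "Graffiti", "days_open": 3},
    {"category": "Street Cleaning", "days_open": 1},
    {"category": "Graffiti", "days_open": 7},
    {"category": "Encampment", "days_open": 14},
    {"category": "Street Cleaning", "days_open": 2},
]

# Count requests per category -- easy for an editor to spot-check by hand.
counts = Counter(r["category"] for r in records)

# Median number of days a request stayed open.
median_days = median(r["days_open"] for r in records)

print(counts)        # request counts per category
print(median_days)   # median days open
```

Because the script is short and uses plain data structures, an editor can verify each count and the median against the source records by hand, which is exactly the kind of human check this policy requires.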

Headlines and SEO

Our journalists and editors may use generative AI tools to generate headlines or copy to help our content appear more prominently in search engines. We will include enough facts in the prompt that the resulting headline is grounded in our journalism.

Summary Paragraphs and Repackaged Content

If you want to add summary bullets at the top of an article—or in other formats across the website—you can use a generative AI tool to do so. For summaries, use the following disclosure:

This summary was generated with AI to give readers a quick overview. It has been reviewed for accuracy. Read our AI policy.

Copyediting

Generative AI may be used as a tool to assist with copyediting tasks, such as identifying grammar issues, suggesting style improvements or rephrasing sentences for clarity.

Social Media Content

Generative AI tools can be used to summarize articles to create social media posts. To avoid label fatigue, we do not require disclosure labels for AI-assisted social media posts, as long as a producer reviews content and we link to this policy in our social bios. Audience teams should do regular content audits to ensure social copy meets ethical guidelines.

Visuals

Love Haight holds AI-generated visuals to the same rigorous ethical standards as all forms of journalism. Because images shape perception instantly and powerfully, our use of generative AI in visual storytelling is governed by principles of truth, transparency and audience trust.

These guidelines apply to all AI-generated or AI-assisted visual materials, including illustrations, composites, animations and enhanced photographs.

Humanity First

When a scene can be documented ethically and accurately by our journalists, human coverage is the preferred option. This is especially true for community events, local businesses, and neighborhood character—our readers want to see the real Haight.

AI-generated visuals may be used when:

  • They are essential to the audience’s understanding
  • The image is impossible or inappropriate to obtain through traditional means

For example: an illustration showing proposed changes to the Haight Street corridor based on city planning documents.

Accuracy Over Aesthetics

AI photo enhancement tools (sharpening, lighting correction, denoising) must reflect reality, not dramatize or distort it. Edits that exaggerate emotion, alter mood or misrepresent the scene violate visual ethics.

Illustrations and Graphics

We permit AI-generated illustrations and graphics for editorial purposes when they are clearly presented as illustrations (not photographs) and serve the story. This includes:

  • Conceptual illustrations for opinion pieces and explainers
  • Data visualizations and charts
  • Illustrated maps and diagrams
  • Creative graphics for newsletters and social media

All AI-generated illustrations must be labeled: “Illustration created with AI” or similar.

No Manipulation of Real People or Events

We do not use AI to create or alter depictions of real people unless clearly disclosed and editorially justified. This includes recreating faces, changing expressions, or adding or removing individuals from scenes.

Disclosure

AI-generated illustrations or composites must be clearly labeled. Captions should disclose the method and source of generation. Example:

This illustration was generated using Midjourney based on architectural renderings from SF Planning. It is a visual approximation intended to help readers understand the proposed development. Read our AI policy.

Audio and Voice

Reporter Voice Cloning for Multilingual Content

Love Haight permits the use of AI voice cloning to create multilingual audio versions of stories in our reporters’ voices, subject to the following requirements:

  • The reporter must give explicit written consent for their voice to be cloned
  • Voice clones may only be used to read translations of that reporter’s own stories
  • All AI-generated audio must be clearly labeled: “Audio generated with AI voice technology”
  • Original English audio, when available, should also be offered
  • Voice clones will not be used for breaking news or sensitive stories without additional editorial review

Why we do this: Over 40% of Haight-Ashbury residents speak a language other than English at home. Audio journalism in Spanish and Cantonese dramatically expands access to our reporting. We believe—with proper consent and disclosure—this serves our community better than the alternative (no audio at all in these languages).

Note: This policy is experimental and will be reviewed after six months. We welcome community feedback at lovehaight.news/contact.

Product Development

Love Haight recognizes that AI-driven personalization and product tools shape how audiences discover, understand, and engage with journalism. We treat these systems with the same ethical rigor we apply to our reporting.

Human-in-the-Loop

AI tools will be reviewed during the development process by editors and the AI committee.

Inclusive Design and Bias Mitigation

All AI product tools must be tested for differential performance across topics and audience segments. Devon (Audience Engagement) conducts testing and mitigation.

Transparency and User Control

Audiences must be informed about AI-driven personalization. Include language such as:

“This feature uses AI to summarize content. It was reviewed by our editorial team. View our AI policy.”

Privacy is Non-Negotiable

All data used for personalization must comply with Love Haight’s privacy policy. We do not sell or share reader data with AI companies.

Environmental Impact

Love Haight acknowledges the energy demands associated with training and deploying large-scale AI systems. As part of our commitment to sustainable journalism, we recognize that responsible AI use includes minimizing our environmental footprint.

We commit to:

  • Prioritizing efficient tools
  • Advocating transparency from vendors and AI companies
  • Considering environmental impact when evaluating new AI tools

Audience AI Literacy

Along with this AI policy, we have developed an AI literacy page to help our audience understand how and why we’re using generative AI. This material will be regularly updated. The page will:

  • Help our audience understand the basics of generative AI
  • Explain why newsrooms use AI in their work
  • Build vocabulary for describing AI
  • Help readers avoid AI-generated misinformation
  • Encourage responsible use of chatbots

Updates

January 10, 2026

  • Added voice cloning guidelines for multilingual audio
  • Established social media disclosure exception
  • Added visual AI guidelines permitting illustrations

July 16, 2025

  • Initial policy publication