Summit Program | 2026 AI x Journalism (May 13–16)

Hacks/Hackers and our host partners are convening the second AI x Journalism Summit in Baltimore from May 13–16. Check this page for updates as we build the schedule.

The Summit schedule includes practical workshops, real-world case studies and collaborative sessions, all aimed at showing participants how to use AI to strengthen reporting, streamline workflows, create more impactful stories and build innovative news products.

There are four presentation tracks:

  • Preview
  • Play
  • Adopt
  • Govern

Preview

Discover breakthrough AI experiments that will transform journalism.

The Survey That Asks “One More Thing” (On Purpose)

Adaptive surveys use AI to ask “one more thing” on purpose, balancing structure with responsiveness to context. This session explores how a small set of fixed questions, paired with AI-guided follow-ups in natural language, helps researchers probe real constraints while staying on script and avoiding generic or leading prompts. Drawing on early field tests, including a rapid-response study during the Iran–Israel conflict internet shutdown and a WhatsApp-based project with Latino migrants in Philadelphia, the speakers share lessons on making audience research more locally relevant, especially in hard-to-reach communities.

  • Patrick Boehler, Founder, Gazzetta
  • Madison Karas, Lead, Service Design, Gazzetta

Using AI to Customize Content & Connect Readers

Discover how AI can help newsrooms foster healthier, cross-partisan conversations, calm heated comment threads, and better connect with diverse audiences. This session offers research-backed, practical tips for using AI to create more constructive social media posts, summarize online comments to reduce toxicity, and turn traditional story formats into easy-to-read content like bullet lists and Q&As. We’ll also explore how AI can help you tailor your stories to fit specific audiences, like younger readers.

  • Gina Masullo, Associate Director, Center for Media Engagement, UT Austin

Play

Code, customize and deploy AI tools in hands-on working sessions.

Vibe coding jam session (All levels)

Bring your laptops and command line! This is a hands-on jam session where we'll share what we're working on, swap tips and tricks for working with AI coding tools, and see what we can build. Open to anyone, whether you're experienced or just getting started. If you haven't coded with AI agents yet, we'll get you set up and ready to build by the end of this session.

A Decentralized Agentic-AI Editorial Board for an International Newsroom (All levels)

This session moves beyond chatbots to explore multi-agent AI workflows in an editorial context. Participants work in small groups to prototype a decentralized AI “editorial board,” where specialized agents take on roles such as fact-checker, ethics reviewer, and cultural advisor. Using a simulated cross-border editorial crisis, groups configure their agents to navigate legal, ethical, and cultural tensions facing global newsrooms. Attendees leave with a practical understanding of how agentic systems support editorial decision-making, where human judgment remains essential, and how decentralized AI workflows surface bias, objectivity, and narrative risks in international reporting.
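
For attendees who want a concrete picture before the workshop, here is a minimal sketch, in TypeScript, of the kind of decentralized review loop the session prototypes. It is not the speakers' system: the board roles, the prompts, and the use of OpenAI's chat-completions endpoint are illustrative stand-ins for whatever agents and model provider a newsroom actually configures.

  // Minimal sketch of a decentralized "editorial board" of specialized agents.
  // Roles, prompts, and the OpenAI endpoint are illustrative placeholders.
  type Review = { agent: string; verdict: "approve" | "flag"; notes: string };

  async function callModel(system: string, draft: string): Promise<string> {
    const res = await fetch("https://api.openai.com/v1/chat/completions", {
      method: "POST",
      headers: {
        Authorization: `Bearer ${process.env.OPENAI_API_KEY}`,
        "Content-Type": "application/json",
      },
      body: JSON.stringify({
        model: "gpt-4o-mini",
        messages: [
          { role: "system", content: system },
          { role: "user", content: draft },
        ],
      }),
    });
    const data = await res.json();
    return data.choices[0].message.content as string;
  }

  const BOARD = [
    { agent: "fact-checker", prompt: "Check each factual claim. Reply APPROVE or FLAG: <reason>." },
    { agent: "ethics-reviewer", prompt: "Review for ethical and legal risk. Reply APPROVE or FLAG: <reason>." },
    { agent: "cultural-advisor", prompt: "Review framing for cross-border cultural blind spots. Reply APPROVE or FLAG: <reason>." },
  ];

  // Each agent reviews the draft independently; a human editor sees every flag.
  // The agents advise, and they never publish on their own.
  export async function convene(draft: string): Promise<Review[]> {
    return Promise.all(
      BOARD.map(async ({ agent, prompt }) => {
        const notes = await callModel(prompt, draft);
        const verdict = notes.trim().toUpperCase().startsWith("APPROVE") ? "approve" : "flag";
        return { agent, verdict, notes };
      })
    );
  }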

  • Areeba Fatima, Technical Researcher, Columbia Graduate School of Journalism

Listening at scale: Building AI tools for audio and video monitoring (Intermediate)

Journalists are surrounded by audio and video they cannot keep up with, from civic meetings and police scanners to podcasts and livestreams. This session shows how multimodal AI systems listen and watch at scale, turning overwhelming streams into usable story leads. Drawing on tools built at Verso, the speaker demos a real-time police scanner monitor and a system analyzing how narratives spread on The Joe Rogan Experience. Attendees then build and adapt a simple video analysis workflow of their own, leaving with a working prototype they can apply to any beat.
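
For a taste of the workflow attendees will build on, here is a minimal sketch, in TypeScript, of one "listening" step: transcribe an audio chunk and flag possible leads. It is not Verso's code; the keyword list is illustrative, and OpenAI's Whisper transcription endpoint stands in for whichever speech-to-text service a newsroom actually uses.

  // Sketch: transcribe one audio chunk from a monitored feed and surface leads.
  // Keywords and the Whisper endpoint are illustrative placeholders.
  import { readFile } from "node:fs/promises";

  const KEYWORDS = ["evacuation", "structure fire", "school lockdown", "water main"];

  async function transcribe(path: string): Promise<string> {
    const form = new FormData();
    form.append("file", new Blob([await readFile(path)]), "chunk.mp3");
    form.append("model", "whisper-1");
    const res = await fetch("https://api.openai.com/v1/audio/transcriptions", {
      method: "POST",
      headers: { Authorization: `Bearer ${process.env.OPENAI_API_KEY}` },
      body: form,
    });
    const data = await res.json();
    return data.text as string;
  }

  // Flag any chunk whose transcript mentions a watched term, with enough
  // surrounding text for a reporter to decide whether to follow up.
  export async function scanChunk(path: string): Promise<string[]> {
    const transcript = (await transcribe(path)).toLowerCase();
    return KEYWORDS.filter((k) => transcript.includes(k)).map(
      (k) => `Possible lead ("${k}") in ${path}: ${transcript.slice(0, 200)}`
    );
  }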

  • Kaveh Waddell, Co-founder and principal, Verso

Next-Gen NotebookLM: 9 Bold and Impactful New Things Newsrooms Can Do (Intermediate)

AI works best in journalism when treated as an adversarial co-pilot, not a summarization engine. This workshop introduces nine research and analysis workflows using NotebookLM to interrogate documents, surface blind spots, and stress-test assumptions. Participants learn how to use critique-driven techniques, reverse-argument analysis, and perspective shifts to probe policy papers and large document sets while staying grounded in sources and citations. Attendees leave with a practical playbook for turning dense material into deeper, more rigorous reporting.

  • Jeremy Caplan, Director of Teaching and Learning, CUNY Newmark Graduate School of Journalism

AI Coding Agents for Investigative Journalism (Intermediate)

AI coding agents speed up investigative work only when configured for transparency, provenance, and defensibility. This session shows how to turn Claude Code from a fast but risky assistant into a methodical collaborator using domain-specific skills that document decisions and require human approval. Participants learn how configuration changes workflows, how to encode editorial standards into prompts, and when AI agents help or hurt investigative reporting.

  • Nick Hagar, Postdoctoral Scholar, Northwestern University 

Using MCP to Analyze Text and Visualize Data (Advanced)

This session shows how you can use the Model Context Protocol (MCP) to analyze text collections and turn the results into clear visual outputs. We walk you through, step by step, how to turn unstructured text into tables and visuals editors trust.
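
Ahead of the session, here is a minimal sketch of the idea: a small MCP server exposing one text-analysis tool that returns its result as a table an editor can read. It assumes the official TypeScript SDK (@modelcontextprotocol/sdk); the tool name and the word-count logic are illustrative, not the session's actual material.

  // Sketch of an MCP server with one illustrative text-analysis tool.
  import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
  import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
  import { z } from "zod";

  const server = new McpServer({ name: "text-analysis", version: "0.1.0" });

  // Tool: count the most frequent words in a pasted text collection and return
  // them as a small Markdown table the MCP client can render.
  server.tool("word_frequency", { text: z.string() }, async ({ text }) => {
    const counts = new Map<string, number>();
    for (const word of text.toLowerCase().match(/[a-z']+/g) ?? []) {
      counts.set(word, (counts.get(word) ?? 0) + 1);
    }
    const rows = [...counts.entries()]
      .sort((a, b) => b[1] - a[1])
      .slice(0, 10)
      .map(([word, n]) => `| ${word} | ${n} |`);
    return {
      content: [
        { type: "text" as const, text: ["| word | count |", "| --- | --- |", ...rows].join("\n") },
      ],
    };
  });

  // Connect over stdio so an MCP-aware client can launch and call this server.
  await server.connect(new StdioServerTransport());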

  • Hong Qu, Lecturer, Harvard Kennedy School

Adopt

Follow the exact steps news organizations took to build their AI solutions from scratch.

Smart, Confident, and Wrong: Designing Responsible A.I. Tools in the Newsroom

Designing AI tools in a newsroom means working with systems that sound confident even when they are wrong, inside a profession built on accuracy and trust. This session shows how careful product and workflow design helps newsrooms capture the benefits of AI without sacrificing editorial standards. Drawing on tools built at The New York Times, the speakers share practical design principles, demos, and industry examples that focus on constraining hallucinations, tracing sources, and building AI products journalists and audiences can trust.

  • Dylan Freedman, A.I. Projects Editor, The New York Times

Large Language Mathematicians: Public Records in Record Time

AI is great at a lot of things, but math and number crunching are not its strong suits. The American City Business Journals wanted a tool to surface stories from public records, but kept running into the same problem: the LLMs would dream up what they thought were the right numbers instead of actually doing the math. In this session, we'll explain how we paired a series of LLM inputs with behind-the-scenes JavaScript to transform hundreds of public records into a useful output for reporters and editors. The same logic flow has been applied to anything that requires a precise output: word counts, data analysis, and more.
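
To make the pattern concrete, here is a minimal sketch of the "model reads, code does the math" flow, written in TypeScript rather than ACBJ's actual JavaScript: the LLM only extracts figures as JSON, and the arithmetic happens deterministically in code. The Filing fields and the injected callLLM function are illustrative assumptions.

  // Sketch: the model extracts raw figures as JSON; plain code computes the
  // answer, so the model never gets a chance to invent a total.
  type Filing = { company: string; revenue2024: number; revenue2025: number };
  type LLM = (prompt: string) => Promise<string>; // wrap whichever provider you use

  async function extractFigures(recordText: string, callLLM: LLM): Promise<Filing> {
    const reply = await callLLM(
      'Return ONLY JSON shaped like {"company": string, "revenue2024": number, ' +
        '"revenue2025": number} for the filing below.\n\n' + recordText
    );
    return JSON.parse(reply) as Filing; // validate against a schema in real use
  }

  // Deterministic arithmetic: the year-over-year change is computed here, in code.
  export async function yearOverYear(recordText: string, callLLM: LLM): Promise<string> {
    const f = await extractFigures(recordText, callLLM);
    const pct = ((f.revenue2025 - f.revenue2024) / f.revenue2024) * 100;
    return `${f.company}: revenue ${pct >= 0 ? "up" : "down"} ${Math.abs(pct).toFixed(1)}% year over year`;
  }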

  • Tyson Bird, Editorial Product Manager, American City Business Journals

Building Sparks: Using AI to Decide What Stories to Write and When

Newsrooms are moving fast on AI for summarization and workflows. This session asks a different question: what if AI helped journalists decide what stories to write and when to publish them? The session shares how Stacker Newswire built Sparks, an AI feature that analyzes patterns across 1.2 million earned pickups to recommend story topics and timing. The speaker explains how the team separated high-risk editorial decisions from lower-risk ones where AI adds value, and how editorial judgment stays central. The approach, while built for a newswire, offers a model other newsrooms can adapt for staff writers and freelancers.

  • Ken Romano, SVP Product, Stacker

Why you should use AI to monitor local meetings

There are 90,000 local government entities in the United States, and the news industry isn't covering nearly as many of them as it once did. AI gives us a path to regaining our footing. The Philadelphia Inquirer will preview Scribe, a tool being developed to track, summarize and score local meetings based on news relevance.

  • Stephen Stirling, Data Editor, The Philadelphia Inquirer
  • Kevin Hoffman, AI Engineer, The Philadelphia Inquirer

Rapid AI Prototyping inside The Minnesota Star Tribune

Legacy newsrooms are built for accuracy, and not necessarily for speed. When the Minnesota Star Tribune decided to build its first audience-facing AI product, we had to act like “pirates in the navy” – navigating a nearly 160-year-old culture while working at startup speeds. In this session, we’ll break down the “Pirate Ship” model used to launch the Culinary Compass and Strib Fair Bot. We’ll share a raw look at how stakeholder research and rapid feedback loops allowed us to bypass traditional “battleship” bureaucracy, providing a practical framework for any legacy organization looking to launch fast without sinking the ship.

  • Frank Bi, Director of Tools & Technology, The Minnesota Star Tribune

Govern

Unpack frameworks and discussions for testing, auditing and managing AI systems responsibly. This track is in collaboration with Poynter.

Do we really need an AI use policy? 

This dynamic session charts scenarios for what the newsroom of the future will look like, the risks AI could pose to those newsrooms, and the policies we could put in place today to prepare. Building on the forward-looking product sessions on day one, this interactive session leads attendees through future mapping to develop AI ethics guidelines for any future newsroom.

The blueprint for success: how to get buy-in and build AI systems that actually work

Many newsrooms experiment with AI, but few turn those experiments into systems journalists trust and use. Teams struggle to align editors, engineers, and product leaders around priorities, guardrails, and clear ownership. This session shares practical lessons from what has worked and what has failed, focusing on how newsrooms earn buy-in and design AI systems built to last. The takeaway is simple: success comes from clear goals, shared standards, and strong product thinking, not from chasing better tools.

  • Ryan Struyk, Director, A.I. Innovation, CNN
  • Heather Ciras, Deputy Managing Editor, Audience, The Boston Globe
  • Rubina Fillion, Associate Editorial Director, AI Initiatives
  • Ole Reissmann, Director AI, SPIEGEL Group

Newsroom Patterns of A.I. Evaluation

Newsrooms increasingly rely on A.I. for reporting and production, but evaluation often lags behind deployment. As A.I. supports tasks like document tagging or drafting alt text, teams need reliable ways to judge quality without adding unnecessary friction for editors. This session shares patterns developed inside a large newsroom to assess A.I. output, align stakeholders, and define success metrics. Attendees leave with practical approaches to measure quality, gather usable data, and build evaluation into everyday newsroom workflows.
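
One such pattern, sketched below as an illustration rather than The Times's internal tooling: score a generation task (alt-text drafting, in this hypothetical) against a small golden set and track the pass rate before the feature enters daily workflows. The cases and scoring rule are invented for the example.

  // Sketch: evaluate a generation task against a small "golden" set of cases.
  type Case = { image: string; mustMention: string[] };

  const GOLDEN: Case[] = [
    { image: "mayor-presser.jpg", mustMention: ["mayor", "podium"] },
    { image: "flood-damage.jpg", mustMention: ["flood", "street"] },
  ];

  // A case passes if the generated alt text mentions every required term.
  function passes(altText: string, c: Case): boolean {
    const t = altText.toLowerCase();
    return c.mustMention.every((term) => t.includes(term));
  }

  export async function evaluate(generateAltText: (image: string) => Promise<string>) {
    const results = await Promise.all(
      GOLDEN.map(async (c) => ({ image: c.image, ok: passes(await generateAltText(c.image), c) }))
    );
    const passRate = results.filter((r) => r.ok).length / results.length;
    return { passRate, failures: results.filter((r) => !r.ok).map((r) => r.image) };
  }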

  • Duy Nguyen, Senior Machine-Learning Engineer, A.I. Initiatives, The New York Times
  • Teresa Mondría Terol, Engineer, A.I. Initiatives, The New York Times