Slow Down First. That’s How Newsrooms Move Faster With AI.
Learnings and observations from the Newsroom AI Lab’s first cohort working with Wisconsin Watch and the Connecticut Mirror.
Most newsrooms assume they need an army of engineers or a big tech budget to experiment with AI. They don't. What they need is a repeatable process, the right collaborators and time to understand their problems before reaching for tools, AI or otherwise.
That's the premise behind the Newsroom AI Lab. In our first cohort, we partnered with Wisconsin Watch and The Connecticut Mirror to guide their teams through a process that begins not with technology but with conversations: interviewing colleagues across roles to surface the friction in, and opportunities to improve, their daily work. From there, each newsroom picked one problem to solve. We kept the scope tight and the timeline short. The goal wasn't to showcase AI, but to see what would actually help.
What Each Newsroom Built
Wisconsin Watch focused on accessibility: the text descriptions attached to photos on their site for people using screen readers, known as alt text. Editors were already writing these manually, but quality and coverage varied because expectations weren't clear. Before introducing any automation, the team had to answer basic questions: What is alt text for? Who does it serve? What does good alt text look like?
Once the Wisconsin Watch team dug into accessibility best practices to create an official policy guide, they built a custom GPT to generate first drafts for all images. The goal wasn't full automation; it was giving the web producer something to edit rather than a blank page. With AI generating drafts and humans reviewing them, alt text coverage and quality both improved significantly.
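Wisconsin Watch's custom GPT and policy guide aren't public, but the human-review step they describe can be supported with simple automated checks. Here is a minimal sketch; the rules below are hypothetical illustrations based on widely shared accessibility guidance (keep it concise, skip redundant "photo of" prefixes), not the newsroom's actual policy:

```python
def check_alt_text(draft: str, max_len: int = 125) -> list[str]:
    """Flag common alt-text policy violations in an AI-generated draft.

    These checks are illustrative only; a newsroom's own policy guide
    should define the real rules a reviewer applies.
    """
    issues = []
    text = draft.strip()
    if not text:
        issues.append("empty draft")
        return issues
    if len(text) > max_len:
        issues.append(f"longer than {max_len} characters")
    lowered = text.lower()
    # Screen readers already announce an image, so these prefixes are redundant.
    for prefix in ("image of", "photo of", "picture of", "graphic of"):
        if lowered.startswith(prefix):
            issues.append(f"redundant prefix: '{prefix}'")
    return issues
```

A web producer could run each AI draft through checks like these before editing, turning the policy guide into fast, consistent feedback rather than a document people have to remember.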
The Connecticut Mirror had long held a large collection of legal documents but hadn't been able to investigate them because of the enormous amount of manual review required. Formats varied. Quality varied. Summarization seemed like the obvious answer. It wasn't, because the team couldn't collectively define what a useful summary would actually look like.
This uncertainty revealed the real need: reporters needed structure for the trove, not summaries. The Connecticut Mirror used AI to group the documents by the legal case they pertained to, bringing a more manageable order to the stack and letting reporters start digging into the nitty-gritty of the actual investigation.
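The Mirror used AI for the grouping, but the underlying idea can be illustrated without it. A minimal non-AI sketch, assuming each document mentions a case docket number in a recognizable format; the pattern and filenames here are hypothetical, and real court formats vary:

```python
import re
from collections import defaultdict

# Hypothetical Connecticut-style docket pattern, e.g. "HHD-CV-21-6140000".
# Real formats differ by court; the Mirror's actual grouping used AI.
DOCKET = re.compile(r"\b[A-Z]{3}-CV-\d{2}-\d{7}\b")

def group_by_case(documents: dict[str, str]) -> dict[str, list[str]]:
    """Map each detected case number to the documents that mention it."""
    groups = defaultdict(list)
    for name, text in documents.items():
        for case in set(DOCKET.findall(text)):
            groups[case].append(name)
    return dict(groups)
```

Even this crude grouping shows why structure beats summarization here: once documents cluster by case, a reporter can pick a case and read everything about it in order, instead of skimming the whole pile.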
What Cohort 1 Taught Us
- Unclear goals stop automation fast: When teams first considered introducing automation via AI, the first thing the process surfaced was disagreement about the desired output. What counts as good alt text? What makes a document summary useful? The automation work couldn't start until these kinds of questions had clear answers.
- Manual work is where the real insight lives: Doing the work by hand surfaced judgment calls and hidden assumptions. Skipping this step makes it impossible to objectively evaluate AI output against clear format and quality standards.
- Efficiency alone is not a goal: "Saving time" shows up often in conversations about which projects to tackle, but it isn't valuable on its own. More thoughtful prioritization comes from tying the value of solving a given problem back to the organization's mission, its commitment to quality reporting and its service to the audience.
- Small scope beats ambitious design: Breaking big ideas into tightly defined smaller efforts adds value faster and delivers usable results sooner. Perfect is the enemy of good enough; avoiding full integrations keeps teams focused on the outcomes they need.
- Familiar systems increase adoption: Both newsrooms addressed their goals mostly with tools they already had. Because they had a clear definition of their needs, they could evaluate whether tools already on hand could do the job, avoiding the common urge to try and compare an endless list of possibly useful tools. This lowered friction, increased the chance the work would stick, and made it easier to see how to apply the lessons elsewhere.
- Documentation creates leverage: Writing down decisions, failures and examples aligned teams and produced assets others can reuse.
How This Changes Our Approach
Cohort 1 of the Newsroom AI Lab reshaped how we run the Lab. Rather than starting at a high level with AI Fluency concepts, we now dive straight into coaching partners through identifying and defining problems they could address. We reduce work that is not tied directly to building a solution. We introduce AI only after teams can describe what "good" looks like for a specific project. And we are clearer about project boundaries:
- Solving the problem must provide tangible value to the organization.
- It must involve a meaningful use of AI.
- It must not be so mission critical that there isn't room for experimentation.
- Stakeholders must be available and ready to participate in the building process.
How Other Newsrooms Can Try This Now
Hacks/Hackers is working on a more robust how-to guide based on The Lab. In the meantime, here are a few simple steps to start with:
1) Start with empathy, not ideas: Before discussing technology, talk to the people who do the work or consume the content. Interview five colleagues across roles (reporters, editors, audience, business and production) and focus on moments that feel broken, confusing or fragile.
Useful prompts:
- What part of your work feels hardest right now?
- What takes longer than it should?
- What do you work around instead of fixing?
- What do you wish was possible but isn’t currently?
Do not try to solve anything yet or jump to thinking of solutions. Just listen and ask a lot of questions to understand their experiences and what's important to their work.
2) Turn observations into a problem statement: Pick one challenge or issue that shows up more than once. Write one sentence. Avoid naming tools. Here are a couple of templates:
- Potential internal workflow problem statement: A person responsible for [specific internal work] needs a way to [do something that’s difficult today] so they can achieve [editorial or operational outcome]. Today, they [what happens now], which creates [risk, friction or quality issue].
- Audience problem statement: A person in our audience who [context or situation] needs [information or experience] so they can [understand, decide, or act]. Today, they [what they experience now], which leads to [confusion, exclusion, or loss of trust].
If you cannot write a crisp problem statement, you're not quite ready to start solving it. Go talk to more people and ask more questions until you deeply understand the problem.
3) Make the invisible visible with a short problem brief: Keep this to one page. This is alignment work. Describe:
- What the experience looks like today
- Where friction shows up
- Why the work matters to the organization
- Who is affected
- What “better” would feel like
If people disagree on these answers, pause. Resolve that first.
4) Do the work manually before using AI: This step matters more than any prompt. Do the task yourself. More than once.
- Write down every step: Step 1, 2, 3 …
- Mark where judgment appears, where people disagree and where standards are unclear. Those are signals. They tell you what needs definition before automation.
5) Use the 4Ds to decide where AI fits: Only now decide whether AI belongs in the workflow. Use the 4Ds to scope responsibly.
- Delegation: What specific task should AI handle? What should it never do?
- Description: What exactly, in plain language, are you asking the system to produce?
- Discernment: How will humans judge whether the output is good or usable?
- Diligence: What guardrails matter here? Editorial standards. Risk. Accountability.
Do not proceed until you have a clear answer for each.
6) Start smaller than you think: Avoid integrations. Avoid automation at publishing. Avoid edge cases. These nice-to-haves tend to add minimal extra value for a lot more effort, and they delay getting to a testable minimum viable product. Instead, build the simplest version that lets a real person try the workflow and react to it. Test it with the people who will actually use it. Then iterate. This approach gets you to value as fast as possible, solving some part of your problem even if it's a bit scrappy. From there you can improve and refine the solution.
7) Meet regularly, take notes: You can accomplish a lot in an hour if your meetings are focused and the discussion stays on track. Our newsroom check-ins were weekly, one-hour meetings. Create an agenda so everyone knows the objective, and end with a shared understanding of the tasks people are responsible for by the next meeting. Long-term projects can slip through the cracks in a newsroom because every day people are keeping a lot of plates spinning. Dial in the length and frequency of meetings to work for your team, and don't burden yourself with unrealistic schedules and tasks.
What Comes Next
Cohort 1 newsrooms are now repeating this process on new projects with continued check-ins with our team. Our curriculum for Cohort 2 organizations builds on these lessons with clearer onboarding, stronger examples, and tighter pacing.
The core lesson holds. Newsrooms keep pace with technology by slowing down at the start. Define the problem, do the work manually, then decide where AI might fit into a solution. This approach turns AI experimentation into real value, rather than distraction without direction.