What is this thing called AI? Developing AI literacy resources for communities

Over the past few months, we’ve been developing new AI literacy resources as part of our Data For Communities programme. Our aim is to create workshops that help people understand what AI can and can’t do, gain hands-on experience, and make their own informed choices about whether to use it or not.

Drawing on our team’s experience working with communities, we spent three sessions mapping out what these workshops could look like. Here’s what we developed.

Session 1 – Getting to grips with the basics

We began by reflecting on what people have told us they want to know about AI. From our work in communities, we’ve heard consistent practical concerns:

  • Will it take my job?
  • Can I trust what it tells me?
  • What about privacy and energy use?

We compared tools like ChatGPT, Claude, Perplexity, DALL·E, and Bing Image Creator, considering how accessible they are and what barriers people face, from confusing interfaces to not knowing how to phrase a question.

We worked on breaking down jargon into plain English:

  • Machine learning → software that learns from patterns
  • Large language models → tools that predict text
  • Neural networks → systems that recognise patterns

My favourite explanation was “AI is like very advanced predictive text—useful, but you still need to check its work.”

Session 2 – Thinking critically about AI

This session focused on understanding when AI is useful and when to be cautious. We explored how AI answers can sound confident and still be wrong, and how bias can appear when AI describes local neighbourhoods or social issues.

We developed approaches to fact-checking AI-generated content and created simple workflows to verify claims using trusted sources.

We also discussed the environmental cost of AI, weighing when its use is genuinely worth the energy—such as when supporting funding applications or generating images for community marketing.

Session 3 – Building practical toolkits

The final session explored what communities actually need to use AI responsibly and effectively. We began to shape a “Community AI Starter Toolkit” that might include:

  • A quick-reference card for fact-checking
  • A checklist for privacy and responsible data use
  • A decision flow for when to use (or avoid) AI
  • Prompts for critical reflection and ethical awareness

We explored how these might apply to real situations we’ve encountered: charities using AI for admin tasks, councils considering AI in decision-making, or community groups wondering where AI fits in their work.

What’s Next

Now that we’ve developed the initial materials, the next stage is to test them with real communities. We’re running three pilot workshops online, open to anyone who wants to participate. These sessions will give us crucial feedback to refine the content and make sure it genuinely meets community needs.

We’re looking for participants from all backgrounds—whether you’re curious about AI, concerned about it, working in community groups, or just want to understand what all the hype is about.

If you’re interested in participating and helping us shape these resources, email sam@opendatamanchester.org.uk and we’ll add you to the waiting list.