A practical guide for Australian public servants
Confident, safe Microsoft Copilot use, starting with cases that never touch agency information.
The Australian Government is actively adopting Artificial Intelligence (AI) to enhance public service delivery, productivity, and policy evidence, guided by the Policy for the responsible use of AI in government (updated December 2025). The APS is therefore being actively encouraged to engage with AI tools; however, many people are not confident about how to engage with this technology safely.
This guide provides support and potential use cases for exploration, and intentionally shows how those cases can be applied across the Copilot licence levels. If you are looking for additional use cases, or would like these prompts adapted for different tools, please reach out.
Start here. Build confidence with prompts that only access public data and will be genuinely helpful to your working day.
The guide. The complete guide, including all use cases by Copilot licence tier, scheduled workflows, and a prompt to meet the requirement for efficiency gains.
Managers. Build AI capability in your team without cutting corners on data handling.
SES. Practical advice to help you lead this transformation with confidence.
Know your ground
Before you start, work out which setup your agency uses. The answer changes what is in scope for you. If you do not know, that is your first question for ICT.
Consumer mode, or signed in without enterprise data protection. Treat it as a public tool: public information only. No official content, no personal information, no classified work.
M365 Copilot. Runs inside your agency tenant with enterprise data protection. It can reference internal content you already have access to, subject to agency policy.
M365 Copilot plus web mode with commercial data protection. Useful, but it needs discipline about which mode you are in and what you are putting in.
If you cannot confirm a use case is approved for your role, treat the tool as the most restrictive version.
For all APS levels. Build confidence first.
If you are wary about using AI at work, you are not being difficult. You are being professional. Privacy, security, and information handling obligations exist for good reasons, and they matter.
This section is for people who want to build a bit of confidence before they go near real agency work. Three things to try this week. Each one uses no agency information at all.
Public information in, public-information-shaped output. Stay in that lane until your agency’s ICT, security, or AI governance team has told you a broader use is approved for your role.
Pick a framework, concept, or policy area you know you should understand better. Ask Copilot to teach it to you.
Act as a patient, plain-English teacher. I want to build a working understanding of [topic].
Start with a two-paragraph orientation for someone with no prior knowledge. Then give me the five things a practitioner would need to know, and three common misconceptions with what is actually true.
Use Australian examples where relevant. No jargon unless you define it.
What to change: the topic, the depth you want, and whether you would like a test question at the end.
Copilot can scan publicly available Australian news and announcements and tell you what is relevant to your portfolio.
Scan publicly available Australian sources from the last 24 hours for news relevant to [portfolio area].
Give me three to five items in plain English, one sentence each. For each item, add one line on why it matters for someone working in this area.
Keep the whole brief under 300 words. Flag anything where source reliability is uncertain.
What to change: the portfolio area, the jurisdictions you care about, the word count.
Copilot can play the other party in a roleplay. Keep the scenario generic and the names made up.
Act as [generic role, for example a frustrated community member] in a roleplay. The scenario: [generic setup with no real names, projects, or locations].
Play the role with realistic emotional range. Start the conversation. I will respond in character.
After six exchanges, pause and tell me what I handled well and two things I could have done better.
What to change: the scenario, your role, the number of exchanges, the type of feedback that would help.
The full guide has more use cases across all three Copilot tiers, plus scheduled workflows and a bottleneck audit you can run on your own week. There is no prize for moving fast on something that needs care.
For all APS levels. Ready to go deeper.
A working set of low-risk, high-value ways to use Microsoft Copilot when you work for government. Structured by Copilot tier. Designed to sidestep data handling concerns where possible, and to flag them plainly where they apply.
First. If a use case can be done without any agency information, do it that way. Public information, synthetic examples, and conceptual work sidestep most risk concerns in one move.
Second. Lateral beats obvious. Summarising your inbox is low value and, depending on content, higher risk. Helping you think, research, and scan external signals is high value and, structured well, lower risk.
These work on any Copilot tier because they never reference government information. Start here if you are still building confidence.
A recurring scan across publicly available sources for news relevant to your portfolio. The output is not a news feed. It is a shaped brief that tells you what happened, why it matters, and where action or commentary may be needed.
Act as a seasoned Australian Public Service media monitor. You are good at spotting what matters to [portfolio area] at [agency], and translating it for a senior audience.
Scan publicly available Australian sources from the last 24 hours. Include ministerial media releases, Hansard, Senate committee announcements, peak body statements, AAP, ABC, The Guardian, and the major state and territory newspapers.
Return a brief in three parts:
1. What happened. Three to five items, plain Australian English, to APS writing standards.
2. Why it matters for [portfolio area] at [agency]. One line of implication per item.
3. Watch items. Anything a policy officer, an EL1, or an SES Band 2 should have on their radar today and over the next seven days.
Keep the whole brief under 400 words. Flag anything where source reliability is uncertain. If you are not sure a claim is accurate, say so; do not present it as fact.
What to change: the portfolio area, the jurisdictions, your trusted sources, the length. For a weekly version, ask for a Friday wrap instead.
When a new Minister, Secretary, or committee chair is appointed, or when you pick up a new brief, use Copilot to build a public-information stakeholder picture. Their speeches, interviews, committee remarks, and stated priorities.
I need a public-information briefing on [name, current role].
Using only publicly available Australian sources (official bios, speeches, Hansard, committee transcripts, media interviews from the last 24 months), give me:
1. Stated priorities and recurring themes in their public remarks.
2. Three direct quotes with dates and sources, showing how they talk about [topic area].
3. Any public views on [specific policy question].
4. A short note on stylistic tendencies (detail-oriented vs high-level, data-driven vs values-driven, formal vs plain-spoken).
Include source links. Flag anything uncertain or where sourcing is thin.
What to change: the person, the topic, the time window.
Use Copilot as a patient teacher for frameworks, concepts, or domains you need to understand but have not had time to study. Useful when moving between adjacent policy areas, or when a promotion lands you in a brief you do not know cold yet.
Act as a patient, plain-English teacher. I want to build a working understanding of [concept or framework, for example the Commonwealth Procurement Rules, or actuarial risk in social policy].
Teach me in these stages:
1. A two-paragraph orientation, assuming no prior knowledge.
2. The five most important things a practitioner would need to know.
3. Three common misconceptions, and what is actually true.
4. A short scenario-based question so I can test my understanding. Wait for my answer before giving feedback.
Use Australian examples where relevant. No jargon unless you define it.
What to change: the concept, the depth, the examples. If you have a specific use for the learning, tell Copilot so the examples land close to real work.
If you want to sharpen your writing without pasting real work, write a short synthetic paragraph on the type of content you usually produce. Not your actual content. Something that mirrors the structure and vocabulary without containing any real information.
Here is a short synthetic example of the kind of writing I produce: [paste 150 to 250 words of fabricated but representative text].
Act as a plain-English coach. Give me:
1. A rewritten version that meets Australian Government Style Manual plain-English standards.
2. A short list of the patterns you changed and why (for example, passive to active, bureaucratic nominalisations, hedging).
3. Three rules I can take into future writing.
Keep your tone direct and practical, not academic.
What to change: the synthetic example, and the standards you want applied (plain English, specific reading level, particular audience).
Stakeholder escalation. A performance conversation. A community meeting that could go sideways. Copilot can play the other party so you can rehearse. Keep the scenario generic, the names fictional, the agency unnamed.
Act as [role, for example a concerned community member] in a roleplay. The scenario is:
[Brief generic setup, no real names, projects, or locations. For example: "A local resident is upset about a consultation process for a generic infrastructure project. They feel unheard and are escalating."]
Play the role with realistic emotional range. Start the conversation. I will respond in character as the [role, for example agency representative].
After six exchanges, pause and give me feedback on:
1. What I handled well.
2. What I missed or could improve.
3. Two alternative lines I could have used at key moments.
What to change: the scenario, your role, the number of exchanges, the feedback focus.
Before you start drafting advice on a policy question, use Copilot to map what is already in the public record. Submissions to past inquiries, peak body positions, academic commentary. You are not getting the answer. You are getting the terrain.
I am researching the public landscape on [policy question, for example regulation of AI in recruitment].
Scan publicly available Australian sources and return:
1. The three to five most cited positions, with who holds them.
2. Points of genuine disagreement, as opposed to surface-level framing differences.
3. Any recent (last 12 months) shifts in the debate.
4. Peak bodies, academics, or think tanks whose work on this is worth reading.
Provide source links for each point. Flag any claim you are not confident about.
What to change: the topic, the time window, whether to include international comparisons.
These run inside your agency tenant with enterprise data protection. They use content you already have access to. Always check your agency has enabled the relevant features and confirmed the use cases.
“Who in my agency has worked on something like this before?” Copilot can search across content you have permission to see, and surface people, projects, or documents relevant to a question you are wrestling with.
I am working on [short non-sensitive description of the problem space]. Search the content I have access to across SharePoint, Teams, and email, and identify:
1. People who appear to have worked on similar problems. Return names, roles, and the documents or conversations that indicated their involvement.
2. Documents that address related questions.
3. Any relevant past decisions or precedents.
Rank results by how closely they match. Exclude anything older than three years unless foundational.
What to change: the problem description (keep it high-level), the time window, the sources. You are asking it to find people, not to read briefs.
When you pick up a new brief, ask Copilot to help you get up to speed by orienting you to the shape of the content in that area. Not to summarise individual documents. To map the territory.
I have just picked up [project or team area]. Give me a first-day orientation using the content I have access to:
1. The shape of the territory. What are the main workstreams, documents, and decision points?
2. Key people. Who owns what? Who should I meet first?
3. Recent activity. What has happened in this area in the last 90 days?
4. Open questions. What looks unresolved or in flight?
Keep it to one page. I will follow up on specific threads.
What to change: the project or team area, the time window, the level of detail.
Instead of asking Copilot to do your work, ask it to look at your work patterns and tell you where the friction is. Your calendar, your task list, your meeting load. This only works inside M365 Copilot because it needs visibility of your actual activity.
Act as a workflow consultant looking at my activity over the last four weeks.
Do not read the content of emails or documents. Look at patterns only:
1. Where am I spending the most time?
2. What meeting patterns look inefficient? (back-to-back scheduling, no prep time, recurring meetings with low attendance).
3. What recurring threads seem to eat time without progressing?
4. Where would a different rhythm, batching, or delegation help?
Give me three changes I could make next week, in priority order.
What to change: the time window, whether to look at content or metadata only.
One of the highest-value uses of Copilot is not any single task. It is a structured think. Coach yourself through identifying where your week actually loses hours, then design small interventions.
This works on any tier because you describe tasks at a conceptual level. No document contents, no stakeholder names, no sensitive detail.
Act as a workflow design coach. I want to do a bottleneck audit on my week.
Ask me questions to identify:
1. The three to five tasks that consistently take longer than they should.
2. For each, what makes them slow (information gathering, coordination, rework, approvals, context-switching).
3. Whether the bottleneck is in the task itself, the inputs, or the handoffs.
Once we have the picture, help me design three interventions. For each, tell me:
- What would change.
- What I would need to set up to make it work.
- Where AI tools could help, and where they could not.
Ask me one question at a time. Do not move on until I have answered.
What to change: the scope (your week, a specific project, a workflow), and whether you want it to push hard or stay gentle.
Where your licence supports scheduled prompts or agent-style workflows (confirm with your ICT team; Microsoft updates this regularly), build these for compound benefit.
What happened in your policy area this week, and what is coming up next week from public signals.
Scan publicly available Australian sources for [portfolio area] from the last seven days and the next seven days.
Past week: three to five developments with implications for my work.
Coming week: scheduled public events (committee hearings, consultation closures, media events, legislation introductions).
Plain English, one page, bullet points.
What comparable jurisdictions (UK, NZ, Canada, specific EU countries) have released on a topic. Useful for policy officers who need to know what is happening elsewhere.
Scan official government and reputable policy research sources in [UK, NZ, Canada, and the Netherlands] for developments on [topic] in the last 30 days.
For each jurisdiction, return: what was announced or released, the publishing body, and a one-line relevance note for an Australian policy context.
Include links. Flag anything where the source is unofficial or the translation is uncertain.
Scheduled tasks consume public information and deliver outputs to you. Before relying on any recurring output, spot-check its accuracy. Verify before it informs actual advice.
For APS 6 to EL1. Building AI capability in a team.
Your people’s caution about AI is an asset, not a blocker. The biggest risk in AI adoption is not that teams move slowly. It is that they move fast on the wrong things.
This section is for managers, team leaders, and coaches who want to build real, durable AI capability without cutting corners on data handling.
Quieter than the hype version.
It sounds like: “Here is what we know is approved. Here is what we are checking on. Here is the lane we work in until that changes. Come to me if something is unclear before you act on it.”
It does not sound like: “We need to move faster.” Or: “Everyone else is doing it.” Or: “Just try it and see.”
Model the safe lane yourself. This media scanning prompt uses only public information, so it works on any Copilot tier. Run it on your portfolio, share the output with your team, and let them see how you use it.
Act as a seasoned Australian Public Service media monitor. You are good at spotting what matters to [portfolio area] at [agency], and translating it for a senior audience.
Scan publicly available Australian sources from the last 24 hours. Include ministerial media releases, Hansard, Senate committee announcements, peak body statements, AAP, ABC, The Guardian, and the major state and territory newspapers.
Return a brief in three parts:
1. What happened. Three to five items, plain Australian English, to APS writing standards.
2. Why it matters for [portfolio area] at [agency]. One line of implication per item.
3. Watch items. Anything a policy officer, an EL1, or an SES Band 2 should have on their radar today and over the next seven days.
Keep the whole brief under 400 words. Flag anything where source reliability is uncertain. If you are not sure a claim is accurate, say so; do not present it as fact.
What to change: the portfolio area, the jurisdictions, your trusted sources, the length. For a weekly version, ask for a Friday wrap instead.
If the answers are not yet clear, that is itself a useful finding. Your team cannot move confidently on ambiguous ground.
For SES Band 1 to 2. Setting tone across a cluster.
The pressure is on APS leaders to help their people engage with AI. Everyone is getting the same generic advice: give it a go, but be careful. This guide tries to bridge that gap in a pragmatic and accessible way, noting that all agencies have set up different processes, risk appetites, and tool licences.
Workforce caution on AI is a leading indicator of maturity: it shows your people appreciate the gravity of the risk. Right now, the challenge is getting them to start. In a few months, the risk may shift to people becoming overconfident and no longer exercising the same due diligence (see: the Dunning-Kruger effect).
This guide starts with useful prompts and advice on how to use them. In the coming weeks, we hope to evolve it to look further ahead: how we keep people talking about how they are actively managing risks in their AI use, so that increased use does not bring increased risk with it.
We would love to have people contribute. If you have a great use case to work through, or have had a sticky hypothetical escalated, let us know. The more we collectively learn now, the less likely it is that the policy will be rigour-tested in the courts.
Each time you communicate to your team about AI use, be clear about what is approved, what is still being confirmed, and the lane to work in until that changes.
Conceptual. No agency data required. Works on any Copilot tier.
Act as a strategic advisor to a senior executive in the Australian Public Service. I lead [portfolio or function] with a cluster of roughly [size] people across [APS levels].
Before I communicate a position on AI capability to my people, help me stress-test my thinking in three areas:
1. Governance. What decisions do I need to have made (or confirmed with our AI governance body) before I communicate a position to my cluster?
2. Signal. What tone and language will my SLT and my team read from the position I take? Where am I at risk of sending a signal I did not intend?
3. Sequencing. What should be in place before I encourage broader use, and how do I sequence investment across capability, licensing, and assurance?
Ask me one question at a time. Use Australian Public Service context. Do not assume I have already resolved any of the above. Push back where my reasoning is thin.
What to change: portfolio description, cluster size, APS level mix, and how hard you want the advisor to push. For a softer version, ask it to coach rather than pressure-test.
For everyone
Go to your agency’s privacy officer, security advisor, legal team, or AI governance body before you proceed if you are not sure a use case is in scope for your role.
Caution is professional good judgement. This is not about removing it. It is about finding the wide space where AI helps you think, scan, and learn, without asking it to touch information it should not see.
The Protective Security Policy Framework, the Information Security Manual, the Privacy Act 1988, and DTA guidance on AI use in government all update regularly. Microsoft also updates Copilot features, licensing behaviour, and data handling rules regularly. Confirm anything described here with your ICT team or current Microsoft documentation before you build a workflow on it.
Note from the author
I spend a lot of time with public servants who understand and are on board with the government’s position to increase AI use, and yet still don’t feel confident about how to enact the policy. Essentially, they’re worried about doing the wrong thing with AI and potentially creating another Robodebt.
While it’s heartening to be reminded how much our public service truly cares, it also shows that a lot of advice hasn’t been grounded in individuals’ day-to-day work. I thought I was helping to move the dial with my Deficit-first Framework, but after 15+ years of implementing technical change in government, I’ve learnt people don’t need frameworks; they need direction and contextualised help.
The majority of people I’ve spoken to have read the policies and have a broad understanding that AI can be helpful. The issue is that, for a whole bunch of reasons, they can’t see how it will be helpful in their own work.
In creating this, I’ve tried to think of use cases that surface relevant and trustworthy information from publicly available sources in a way that makes the most sense to the individual. Most advice tells people to start with data inside the system, which makes it inherently higher risk. These use cases are designed to give you a meaningful result from the get-go.
I will keep updating this as the tools and the rules evolve. If something is out of date, or if you have a use case that deserves a place, please let me know. Also, if you have had a win, I’d love to showcase it. Please send me an email: hello@theunordinary.co.
Questions, corrections, or suggestions are welcome at hello@theunordinary.co.