Amaze guidance on the use of AI in relation to your child’s SEND
In the last few months, Amaze has noticed that more of you are using artificial intelligence (AI) tools like ChatGPT and Gemini for SEND advice, or to help with creating documents relating to your child’s SEND.
Whilst we understand how useful large language model AI can be, particularly for people who may not find it easy to write or organise their thoughts, we are seeing a range of problems arising from the use of AI that we need to warn you about.
Our SENDIASS and benefits advice teams have created the following guidance in response to issues parents and Amaze have encountered around the use of AI. It also suggests ways in which AI can be useful.
What can go wrong with using AI?
1. AI is not a legal expert in SEN law in the UK
AI tools are trained to offer a seemingly credible answer to your query, even when they are not equipped to do so.
This means the "risk of incorrect or misleading legal information… is high", according to the SEND Information, Advice and Support Services Network's AI statement (December 2025).
The High Court of Justice also issued the following warning in 2025: “Freely available generative artificial intelligence tools, trained on a large language model such as ChatGPT are not capable of conducting reliable legal research. Such tools can produce apparently coherent and plausible responses to prompts, but those coherent and plausible responses may turn out to be entirely incorrect. The responses may make confident assertions that are simply untrue. They may cite sources that do not exist. They may purport to quote passages from a genuine source that do not appear in that source.”
2. AI is not a reliable source of medical information
AI doesn’t know your full medical history and can provide false or even dangerous answers to medical queries.
See the recent Guardian article "Google AI Overviews put people at risk of harm with misleading health advice", and Mind's statement about the use of AI summaries for mental health advice: "'Very dangerous': a Mind mental health expert on Google's AI Overviews" (The Guardian).
Being cautious about AI responses to medical queries is especially important for young people, as a recent study showed that young people and men were much more likely to use AI for health advice. See AI in NHS care: what’s the impact, and what do people think? | Healthwatch
3. AI cannot fully understand all the particulars of your situation
Even if you give it lots of personal data (which we don’t advise), it doesn’t have the expertise or sophistication to ask you the right questions to build up a realistic picture.
4. AI answers are general and not specific
A successful claim for DLA or PIP, or a request for an Education, Health and Care (EHC) Needs Assessment, cannot be generic. It relies on meaningful personal detail – on you cataloguing all the individual and specific ways that you are giving extra support to your child because of their disability. For example, AI might produce some sample text that says, “Jo is picky with food” or “Jo only eats pizza”. The DWP is instead looking for the kind of real-life, detailed testimony that can only come from an individual family. You might need to say, “Jo will only eat stonebaked cheese and tomato pizza from Asda, which is a long drive from my house.”
5. Using AI can be detected and may undermine your credibility
If you use AI to generate text for sections of your DLA or PIP claim, for example, you risk your claim being turned down by the DWP. Likewise, we would not advise using AI to create arguments for why your child needs more SEN support in school. It is always better to speak from your direct experience.
What are good uses of AI?
- AI is great for getting started or getting a structure for a letter or document, as long as you replace generic material with details specific to your situation
- AI can be a great tool to help you translate formal communication from experts, local authorities and others into plain language. Ask it to simplify text or make it more accessible to you as a parent carer. Just be careful that you do not lose important information in the simplification.
- AI is brilliant at pulling together advice from lots of different online sources very quickly. Just make sure you ask it directly where it got its information. Then check these sources yourself.
- AI is great at giving you ideas for things you might not have thought about.
- AI can summarise a long document for you, or pull out the key points after you have read it yourself. Just be careful it hasn’t missed anything.
Further information
- Google AI Overviews put people at risk of harm with misleading health advice
- AI in NHS care: what’s the impact, and what do people think? | Healthwatch
- Charities and Artificial Intelligence – Charity Commission
- Position Statement on the use of Artificial Intelligence (AI) by NHS professionals when answering medicine-related questions
- ‘Very dangerous’: a Mind mental health expert on Google’s AI Overviews | Mental health | The Guardian
