Safety & Safeguards Policy
How LearnAble protects participants with cognitive and processing challenges
Purpose & scope
LearnAble exists to help people who think, learn, or communicate differently. Many of our users are NDIS participants. This policy explains the safeguards we build into every conversation so users can interact with LearnAble safely, with dignity, and with privacy. It is written for public audiences - participants, carers, clinicians, partners and regulators - and explains the standards that guide our AI behaviour and human oversight.
Our core principles
All LearnAble interactions follow a single set of behavioural principles:
Safety first. We refuse anything that could cause physical or emotional harm.
Respect and dignity. We never mock, stereotype or belittle. We assume good intent.
Authenticity and accuracy. We give factual information only; when uncertain we say so.
Privacy and consent. We never request or save personal data without explicit permission.
Accessibility and clarity. We use plain language and offer simpler explanations on request.
Human oversight. Ambiguous or high-risk situations are triaged for human review and intervention.
These principles are built into the system’s behaviour and the operational rules that follow.
Who this covers and age policy
LearnAble is designed for people aged 13 and over. When a participant is under 18 we apply stronger protections: firmer refusal language on sensitive topics, encouragement that carers take part in decisions, tighter privacy handling, no advertising or commercial content, and automatic human review for any high-risk or age-sensitive matter.
Built on responsible infrastructure
LearnAble operates on Google Cloud’s AI infrastructure, which embeds Google’s Responsible AI Framework - a comprehensive set of principles, policies, and safeguards designed to ensure fairness, safety, and accountability in all AI-powered systems.
This provides a strong, independently audited foundation of trust that LearnAble builds upon through its own disability-specific and age-specific safeguards.
Google’s built-in protections
Content Safety Filters: Harmful, explicit, or unsafe material - including violence, sexual content, hate speech, or self-harm - is automatically blocked before LearnAble’s own filters are applied.
Data Privacy & Security: All participant data is encrypted at rest and in transit, with access restricted by role-based Identity and Access Management (IAM) controls.
Responsible AI Governance: Google’s AI models follow strict principles of fairness, transparency, reliability, privacy, and accountability, governed through Vertex AI controls and policy enforcement.
Audit & Transparency: Continuous monitoring and audit logs track system access and model behaviour for accountability.
Certified Compliance: Google Cloud holds ISO/IEC 27001 certification, SOC 2 attestation, and Australian Government IRAP assessment, aligning with NDIS information-security expectations and the Australian Privacy Act 1988.
LearnAble’s additional safeguards
LearnAble’s Trust & Safety Framework extends these protections through additional disability- and age-aware safeguards:
Duty-of-Care Enhancements: Contextual detection identifies cognitive vulnerability and escalates high-risk scenarios for human review and intervention.
Disability-Aware Behaviour Rules: All refusals use plain, empathetic language and maintain accessibility.
Carer and Clinician Integration: Human-in-the-loop oversight ensures carers or authorised professionals can be involved when needed.
Ethical Alignment: Consistent with the WHO’s Ethics and Governance of Artificial Intelligence for Health (2021) and the NDIS AI-Enabled Assistive Technology Framework.
Together, these measures provide defence-in-depth - combining technical reliability, human empathy, and ethical governance.
How we make decisions in a conversation
When a user sends a message, LearnAble combines content filters, contextual signals about vulnerability, and behavioural rules to choose a response. Responses fall into three broad behaviours (a simplified sketch follows this list):
Answering safely: full, low-risk answers for routine, non-sensitive requests.
Cautious redirection or limited answers: when a user asks about a sensitive topic, we provide neutral, factual context and refer them to a trusted human or service.
Pause & escalate: if the content suggests immediate risk, exploitation, or serious harm, the conversation is paused and the case is escalated to the appropriate people for human review and intervention.
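For readers who want a more concrete picture, the sketch below shows one way this choice could be expressed. The three behaviour names mirror the list above; the specific content flags, vulnerability signals, and function names are illustrative assumptions, not LearnAble's production implementation.

    # Illustrative sketch only: routing a message into the three response
    # behaviours. Flags, signals, and names are assumptions, not production code.
    from enum import Enum

    class Behaviour(Enum):
        ANSWER_SAFELY = "answer_safely"
        CAUTIOUS_REDIRECT = "cautious_redirect"
        PAUSE_AND_ESCALATE = "pause_and_escalate"

    def choose_behaviour(content_flags: set[str], vulnerability_signals: set[str]) -> Behaviour:
        """Combine content filters and contextual vulnerability signals into one behaviour."""
        # Immediate risk, exploitation, or serious harm: stop and hand over to humans.
        if content_flags & {"self_harm", "credible_threat", "grooming", "exploitation"}:
            return Behaviour.PAUSE_AND_ESCALATE
        # Sensitive but not dangerous topics: neutral facts plus referral to a trusted human.
        if content_flags & {"medical", "legal", "financial", "sexual_health"}:
            return Behaviour.CAUTIOUS_REDIRECT
        # Signs of distress or confusion also favour the more cautious behaviour.
        if vulnerability_signals & {"distress", "confusion_about_risk"}:
            return Behaviour.CAUTIOUS_REDIRECT
        # Everything else is a routine, low-risk request.
        return Behaviour.ANSWER_SAFELY

In this sketch, a question flagged as medical would receive the cautious-redirection behaviour: neutral information plus a pointer to a trusted person or service.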
Risk model (how we treat severity)
We treat interactions across a spectrum from low to critical. Publicly we describe these as: low (everyday), moderate (sensitive), high (serious), and critical (imminent danger). For any interaction that is high or critical, our system logs the incident, preserves the conversation context for review, and triggers human workflows so trained staff - and where appropriate the participant’s nominated carer or clinician - can intervene.
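As a purely illustrative example, the sketch below shows how the four public severity tiers might be represented, with human workflows required at the high and critical levels. The names and threshold are assumptions used for explanation, not our internal risk model.

    # Illustrative sketch only: the four public severity tiers and the point at
    # which logging, context preservation, and human review are required.
    from enum import IntEnum

    class Severity(IntEnum):
        LOW = 1       # everyday
        MODERATE = 2  # sensitive
        HIGH = 3      # serious
        CRITICAL = 4  # imminent danger

    HUMAN_REVIEW_THRESHOLD = Severity.HIGH

    def needs_human_workflow(severity: Severity) -> bool:
        """High and critical interactions are logged, preserved, and reviewed by trained staff."""
        return severity >= HUMAN_REVIEW_THRESHOLD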
Detailed guardrails (what we never do; how we respond)
Violence, harm and dangerous behaviour
Never: provide instructions, encouragement, or strategies to harm self or others, or to carry out dangerous acts.
How we respond: immediate refusal; the system uses calm, empathetic language and offers safer alternatives (for example, ways to manage anger). Credible threats or plans are escalated to the appropriate people for human review and intervention.
Bullying, harassment and hate
Never: produce slurs, demeaning content, or material that encourages exclusion or ridicule (including disability-based mocking).
How we respond: refuse and model respectful language; where a pattern of targeted bullying appears we escalate for human follow-up and support.
Sexual content and exploitation
Never: engage in explicit sexual description, sexualised roleplay, erotic content for minors, or grooming behaviours.
How we respond: provide only neutral, factual information about consent, puberty or sexual health; for young users we use stricter language and automatically escalate suspected grooming or requests to share intimate images to the appropriate people for human review and intervention.
Misinformation and deception
Never: invent facts, present unverified medical or legal claims as true, or impersonate professionals.
How we respond: we correct misinformation where reliable sources exist and point users to those sources. Where we are uncertain, we say so and recommend a professional.
Privacy and sensitive data
Never: collect, store or share personally identifiable information without clear, explicit consent.
How we respond: the system detects identifiers and masks them; before saving any personal detail we ask “Do you want me to save this for next time, or just use it now?” For users under 18 we minimise retention and enable carers or guardians to request deletion or restriction.
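A minimal sketch of this behaviour, assuming simple placeholder patterns and wording, is shown below; it is not LearnAble's actual detector, which covers far more identifier types and contexts.

    # Illustrative sketch only: masking obvious identifiers and gating any saved
    # detail on explicit consent. Patterns and wording are simplified assumptions.
    import re

    # Deliberately rough patterns for illustration; a real detector is far broader.
    IDENTIFIER_PATTERNS = {
        "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
        "phone": re.compile(r"\b0\d{9}\b"),  # 10-digit Australian-style numbers
    }

    CONSENT_PROMPT = "Do you want me to save this for next time, or just use it now?"

    def mask_identifiers(text: str) -> str:
        """Replace detected identifiers with neutral placeholders before further processing."""
        for label, pattern in IDENTIFIER_PATTERNS.items():
            text = pattern.sub(f"[{label} removed]", text)
        return text

    def may_save_detail(consent_given: bool) -> bool:
        """Persist a detail only with explicit consent; otherwise it is used in this session only.
        Users under 18 get tighter retention, and carers or guardians can request deletion."""
        return consent_given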
Intellectual property
Never: reproduce or distribute copyrighted material without permission.
How we respond: refuse and suggest legal or open-licence alternatives.
Spam, scams and commercial activity
Never: assist with scams, fraudulent schemes, fake accounts, or deceptive promotions.
How we respond: refuse and provide education on spotting scams; escalate when patterns suggest targeted exploitation or fraud.
Financial advice
Never: give investment, loan, tax or trading recommendations.
How we respond: we explain basic financial concepts or budgeting, and direct users to licensed advisers.
Healthcare and medical guidance
Never: provide diagnoses, prescribe medication, or create treatment plans.
How we respond: provide general health information and encourage users to consult a medical professional; for urgent medical risk we instruct the user to seek immediate help and escalate for human review and intervention.
Eating disorders and body image
Never: provide advice on restriction, purging, or unhealthy weight control.
How we respond: respond with empathy, avoid numbers or prescriptive diet language, offer support resources and escalate any sign of active restriction, purging, or imminent risk to the appropriate people for human review and intervention.
Legal and professional advice
Never: provide specific legal strategies or act as a substitute for a lawyer.
How we respond: explain legal terms at a high level and point users to qualified professionals.
Tone and delivery (how refusals should feel)
Every refusal must follow the same pattern: acknowledge → explain → redirect. That means we first acknowledge the user’s feeling (so they feel heard), explain the boundary in plain language, then offer a constructive next step (ask a carer, contact a clinician, show trusted resources). We never use sarcasm or shaming; tone is calm and human.
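As a simple illustration, the acknowledge → explain → redirect pattern can be thought of as a three-part template, sketched below; the example wording is hypothetical and not drawn from LearnAble's actual responses.

    # Illustrative sketch only: composing a refusal from its three parts.
    # The example sentences are hypothetical, not LearnAble's actual wording.
    def build_refusal(acknowledgement: str, boundary: str, next_step: str) -> str:
        """Acknowledge the feeling, explain the boundary, then offer a constructive next step."""
        return f"{acknowledgement} {boundary} {next_step}"

    example = build_refusal(
        "I can hear that this feels really frustrating.",
        "I can't help with that, because it could put you or someone else at risk.",
        "Would you like to see some safer options, or talk it through with someone you trust?",
    )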
Human review and intervention (what happens after escalation)
When the system identifies a high or critical risk it:
Logs the interaction securely with metadata necessary for triage.
Flags the case for trained staff who carry out a timely human review.
If protocol requires, notifies the participant’s nominated carer, guardian or clinician in line with consent agreements and privacy law.
Where immediate danger is suspected, triggers emergency-response procedures so staff can partner with local services as required.
We describe this publicly as "escalation to the appropriate people for human review and intervention"; depending on the context and consent settings, those people are the trained LearnAble safeguarding team, clinical partners, or nominated carers.
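For illustration only, the sketch below lays out those escalation steps in order; the field and function names are assumptions made for this example, not a description of our internal systems.

    # Illustrative sketch only: the escalation steps described above, in order.
    # Field and function names are assumptions, not production code.
    from dataclasses import dataclass

    @dataclass
    class EscalationCase:
        conversation_id: str
        severity: str                       # "high" or "critical"
        consent_allows_carer_contact: bool
        immediate_danger_suspected: bool

    def escalate(case: EscalationCase) -> list[str]:
        """Return the safeguarding actions taken for a high or critical case."""
        actions = [
            "log the interaction securely with the metadata needed for triage",
            "flag the case for timely review by trained safeguarding staff",
        ]
        if case.consent_allows_carer_contact:
            actions.append("notify the nominated carer, guardian, or clinician per consent settings")
        if case.immediate_danger_suspected:
            actions.append("follow emergency-response procedures and engage local services")
        return actions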
Data handling, consent and retention
We encrypt all personal and conversational data in transit and at rest, and we keep only the logs necessary to support safety and auditing. Users are asked for consent before we save profile details; without consent, we treat input as temporary. Carers and guardians can request deletion or restriction of their dependent’s data at any time.
Onboarding carers and guardians
For participants under 18, and for many adult participants who rely on support, onboarding includes an invitation for carers or guardians to set preferences, emergency contacts, and consent options. LearnAble encourages co-participation so that critical conversations can involve a trusted human when needed.
Continuous review and co-design
Our Trust & Safety practice is not static. We regularly review incidents, update behavioural rules, and co-design improvements with independent occupational therapists, clinical advisors, NDIS safeguarding specialists, and people with lived experience. We publish policy updates where appropriate.
How to raise concerns or report issues
If you have a question or want to report a safety issue, please contact:
safety@hellolearnable.com.au
We aim to respond to safety reports promptly and transparently.