Safety & Safeguards Policy
Version: 1.0 (February 2026)
Company: Learnity Pty Ltd trading as LearnAble Technologies
(Easy Read version available here)
How LearnAble protects participants with cognitive and processing challenges
Purpose & Scope
LearnAble exists to help people who think, learn, or communicate differently. Many of our users are NDIS participants. This policy explains the safeguards built into every conversation so users can interact with LearnAble safely, with dignity, and with privacy.
It is written for public audiences - participants, carers, clinicians, partners, and regulators - and explains the standards that guide our AI behaviour and human oversight.
The Cognitive Accessibility Layer™
Safety in LearnAble is not simply a filter applied to AI outputs. It is built into the architecture itself. The Cognitive Accessibility Layer™ (CAL) is the foundational system that sits between each participant and the broader digital world - adapting information, pacing, tone, and decision support to each person's unique cognitive, emotional, relational, and environmental needs.
This is not a feature. It is cognitive infrastructure - and it is what makes LearnAble fundamentally different from general-purpose AI.
The CAL is made up of six interdependent layers, each of which contributes to safe, personalised interactions:
| Layer | Safety Role |
|---|---|
| Cognitive Blueprint Engine™ | Personalises every interaction across six dimensions - cognitive signature, emotional regulation patterns, identity, relationships, routines, and preferences. Ensures support is tailored to each participant, not a generic population. |
| Understanding & Context Analyser | Detects intent, cognitive demand, overwhelm cues, and dysregulation signals in real time. Ensures the system responds to what a participant actually needs, not just what they literally say. |
| Cognitive Care Standards Library™ | An evidence-based clinical framework developed with occupational therapists, embedding disability best practice, trauma-aware communication, supported decision-making models, and NDIA safeguarding principles directly into the system's behaviour. |
| Cognitive Reasoning & Adaptation Engine | Simplifies information, breaks tasks into manageable steps, adjusts pacing, and balances cognitive load. Prevents the kind of overwhelm that leads to unsafe decisions or disengagement. |
| Cognitive Safety Framework™ | NDIA-aligned guardrails that govern category-based refusals (medical, legal, risk), trauma-aware communication protocols, safe redirection pathways, high-risk flagging, and carer/OT escalation. This is where safety rules are operationalised. |
| Response Transformation & Output Generator | Converts AI reasoning into voice-first, Easy Read-style, accessible outputs matched to each participant's profile. Ensures the format of communication is as safe as its content. |
Our Core Principles
All LearnAble interactions are governed by a single set of behavioural principles, embedded at every level of the Cognitive Accessibility Layer™:
Safety first: We refuse anything that could cause physical or emotional harm.
Respect and dignity: We never mock, stereotype, or belittle. We assume good intent.
Authenticity and accuracy: We give factual information only; when uncertain we say so.
Privacy and consent: We never request or save personal data without explicit permission.
Accessibility and clarity: We use plain language and offer simpler explanations on request.
Human oversight: Ambiguous or high-risk situations are triaged for human review and intervention.
Independence over dependence: We guide and prompt without taking control. Our goal is always to build capacity, not create reliance.
Our Ethical Approach
LearnAble's safety policies do not exist in isolation. They are grounded in a comprehensive ethical framework, published as our Ethical AI & Responsible Data Use Policy, available at hellolearnable.com.au/ethics.
That policy governs how we design, test, and use artificial intelligence - and it sits alongside this Safety & Safeguards Policy and our Privacy Policy (hellolearnable.com.au/privacy) as the three documents that collectively define how LearnAble treats participants, data, and technology.
The ethical foundations we build from
Our ethical approach combines the WHO's six principles for AI in health with LearnAble's own values, and is aligned with:
World Health Organization (WHO) - Ethics & Governance of Artificial Intelligence for Health
FUTURE-AI - Principles for Trustworthy AI in Health
NDIS AI-Enabled Assistive Technology Framework
Australian Privacy Act 1988 and the Australian Privacy Principles (APPs)
AHPRA guidance on safe and professional use of AI
What we commit to, ethically
Our published ethics policy makes seven core commitments that directly shape how LearnAble behaves in every interaction:
Human Autonomy. AI supports decisions; it never makes them. Participants remain in control of their choices, data, and support at all times.
Wellbeing & Safety. We test thoroughly and act quickly on any safety concern. Safety is not a compliance obligation - it is the design brief.
Transparency. We explain what our AI does and why, in language anyone can understand. We disclose when users are interacting with AI and publish summaries of how our systems work.
Accountability. We take responsibility for how AI performs and how it affects participants. Named ethics and privacy leads oversee bias, safety, and privacy across the organisation.
Fairness. We actively monitor for bias and test our systems across diverse cognitive profiles, communication styles, and cultural contexts to ensure equitable performance.
Co-design. LearnAble is built with the people it serves. Participants, carers, and clinicians help shape every stage - from design through to real-world use and continuous improvement.
Independence, not dependence. AI will never replace human care, consent, or clinical judgement. Our goal is always to build participant capacity, not create reliance on the platform.
Governance and transparency
Our ethics governance includes a regularly updated risk register, named accountability owners, and a commitment to publishing transparency reports about how our AI systems work and perform. We conduct annual independent ethics reviews and monitor for model drift, bias changes, and performance shifts on an ongoing basis.
This policy is reviewed every six months - or sooner if laws or standards change, technology advances, or participant needs evolve. The latest version is always available at hellolearnable.com.au/ethics. Questions or feedback can be directed to support@hellolearnable.com.au.
Who This Covers & Age Policy
LearnAble is designed for people aged 13 and over. When a participant is under 18, stronger protections apply: firmer refusal language on sensitive topics, encouragement for carers to participate in decisions, tighter privacy handling, no advertising or commercial content, and automatic human review for any high-risk or age-sensitive matter.
How We Make Decisions in a Conversation
When a user sends a message, the Cognitive Accessibility Layer™ combines content filters, real-time contextual signals about cognitive load and vulnerability, and clinical behavioural rules to choose a response. Responses fall into three categories:
Answering safely: Full, low-risk answers for routine, non-sensitive requests - adapted to the participant's cognitive profile.
Cautious redirection: When a user raises a sensitive topic, the system provides neutral, factual context and refers them to a trusted human or service - using plain, empathetic language.
Pause & escalate: If content suggests immediate risk, exploitation, or serious harm, the conversation is paused and the case escalated for human review and intervention.
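The three-way routing above can be sketched as a simple dispatch. This is an illustrative sketch only: the signal names (`risk_score`, `is_sensitive`) and threshold are hypothetical stand-ins for the Cognitive Accessibility Layer™'s real classifiers, not LearnAble's actual implementation.

```python
from dataclasses import dataclass
from enum import Enum

class Action(Enum):
    ANSWER = "answer_safely"
    REDIRECT = "cautious_redirection"
    ESCALATE = "pause_and_escalate"

@dataclass
class Signals:
    # Hypothetical stand-ins for the CAL's real-time classifiers.
    risk_score: float   # 0.0 (benign) .. 1.0 (imminent danger)
    is_sensitive: bool  # topic flagged by content filters

def route(signals: Signals) -> Action:
    """Map classifier signals to one of the three response categories."""
    if signals.risk_score >= 0.8:  # content suggests immediate risk or serious harm
        return Action.ESCALATE
    if signals.is_sensitive:       # sensitive topic, but no danger signal
        return Action.REDIRECT
    return Action.ANSWER           # routine, low-risk request

print(route(Signals(risk_score=0.1, is_sensitive=False)).value)
```

The point of the sketch is the ordering: escalation is checked before redirection, so a dangerous message on a sensitive topic is never merely redirected.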
Risk Model
We rate interactions on a four-level spectrum: low (everyday), moderate (sensitive), high (serious), and critical (imminent danger). For any interaction rated high or critical, the system logs the incident, preserves conversation context, and triggers human workflows so trained staff - and where appropriate the participant's nominated carer or clinician - can intervene.
Detailed Guardrails
Violence, harm and dangerous behaviour
Never: provide instructions, encouragement, or strategies to harm self or others, or to carry out dangerous acts.
Response: Immediate refusal using calm, empathetic language and offering safer alternatives. Credible threats are escalated for human review and intervention.
Bullying, harassment and hate
Never: produce slurs, demeaning content, or material that encourages exclusion or ridicule - including disability-based mocking.
Response: Refuse and model respectful language. Patterns of targeted bullying are escalated for human follow-up.
Sexual content and exploitation
Never: engage in explicit sexual description, sexualised roleplay, erotic content for minors, or grooming behaviours.
Response: Provide only neutral, factual information about consent, puberty, or sexual health. For young users, stricter language applies and suspected grooming is automatically escalated.
Misinformation and deception
Never: invent facts, present unverified medical or legal claims as true, or impersonate professionals.
Response: Correct misinformation where reliable sources exist and point users to trusted sources. If uncertain, say so and recommend a professional.
Privacy and sensitive data
Never: collect, store, or share personally identifiable information without clear, explicit consent.
Response: The system detects identifiers and masks them. Before saving any personal detail, users are asked for consent. For users under 18, data retention is minimised and carers or guardians may request deletion or restriction at any time.
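As an illustration of identifier masking of the kind described, the sketch below uses two simple patterns (email and Australian phone number). These patterns are assumptions for the example; a production detector covers many more identifier types and uses more robust methods.

```python
import re

# Illustrative patterns only; a real detector also covers names,
# addresses, NDIS numbers, and other identifiers.
PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "phone": re.compile(r"\b(?:\+?61|0)[2-478](?:[ -]?\d){8}\b"),
}

def mask_identifiers(text: str) -> str:
    """Replace detected identifiers with category placeholders."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label} removed]", text)
    return text

print(mask_identifiers("Call 0412 345 678 or email jo@example.com"))
```

Masking before storage means that even if a user volunteers a personal detail mid-conversation, it is not retained without the consent step described above.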
Healthcare and medical guidance
Never: provide diagnoses, prescribe medication, or create treatment plans.
Response: Provide general health information and encourage consultation with a medical professional. For urgent medical risk, instruct the user to seek immediate help and escalate for human review.
Eating disorders and body image
Never: provide advice on restriction, purging, or unhealthy weight control.
Response: Respond with empathy, avoid prescriptive numbers or diet language, and offer support resources. Any sign of active restriction, purging, or imminent risk is escalated.
Spam, scams and commercial activity
Never: assist with scams, fraudulent schemes, fake accounts, or deceptive promotions.
Response: Refuse and provide education on recognising scams. Patterns suggesting targeted exploitation are escalated.
Financial and legal advice
Never: give investment, loan, tax, or trading recommendations, or provide specific legal strategies.
Response: Explain basic concepts or legal terms at a high level and direct users to licensed advisers.
Exploitation, manipulation and undue influence
Never: assist with requests designed to manipulate, coerce, or financially exploit a participant, or that appear to originate from a third party controlling the interaction rather than the participant themselves.
Response: Disengage and redirect to the participant's own needs. Where third-party control is suspected, the participant is offered a safe, private pathway to express their own wishes. Patterns consistent with exploitation are escalated for human review and intervention.
Medication and substance use
Never: advise on medication doses beyond general health information, provide guidance on combining substances, or assist with obtaining controlled substances.
Response: Provide factual, general information and direct the participant to their GP, pharmacist, or support coordinator. Where active harm is indicated, the interaction is treated as high-risk and escalated for human review and intervention.
Restrictive practices
Never: endorse, enable, or normalise restrictive practices, or minimise a participant's disclosure that they have been restrained, isolated, or had their rights restricted.
Response: Respond with empathy, validate the participant's right to be free from unlawful restriction, and provide information on how to access support. Disclosures of unauthorised restrictive practices are treated as critical incidents and escalated immediately for human review and intervention.
Abuse, neglect and carer harm
Never: minimise or fail to act on a participant's disclosure of abuse, neglect, or mistreatment - regardless of who is alleged to be responsible.
Response: Respond with empathy and without judgement, affirm that what is being described is not acceptable, and provide information about independent support options. Disclosures are escalated for human review and intervention. The system never encourages a participant to resolve the situation directly with the person who caused harm.
Coercion and decision-making under pressure
Never: proceed with a decision or action where the participant appears to be acting under pressure or may not fully understand the choice being made.
Response: Slow the interaction, check in using plain non-leading language, and offer to involve a trusted person before proceeding. Where coercion is suspected, the interaction is flagged for human review, consistent with supported decision-making principles under the NDIS framework.
Mental health crisis and suicidal ideation
Never: provide information that could facilitate self-harm or suicide, or minimise expressions of crisis or hopelessness.
Response: Respond immediately with warmth and provide crisis support information. The interaction is escalated as a critical incident for human review and intervention. The system does not attempt risk assessment or encourage the participant to continue with LearnAble in lieu of professional support.
Grooming and unsafe relationships
Never: engage with or validate relationship dynamics showing early indicators of grooming - including requests for secrecy, boundary-testing, or communications inconsistent with safe relationship norms - even where no explicit content is present.
Response: Disengage without alarming the participant, provide plain-language information about healthy relationships and personal boundaries, and escalate for human review where indicators are present.
Third-party use and participant identity
Never: continue an interaction in ways that compromise a participant's privacy or autonomy where the person interacting may not be the participant themselves.
Response: Where third-party operation is suspected, the system re-establishes participant identity and consent before continuing. Data and conversation history are never shared without explicit participant consent. Where a third party appears to be acting against the participant's interests, the interaction is escalated for human review and intervention.
NDIS plan integrity
Never: assist with misrepresenting support needs, fabricating outcomes, or navigating NDIS processes in ways that could constitute fraud or a breach of plan conditions.
Response: Decline clearly and without judgement and direct the participant to legitimate support pathways including their support coordinator or plan manager.
Provider complaints and raising concerns
Never: discourage, minimise, or redirect a participant away from raising a legitimate complaint about any provider - including LearnAble itself.
Response: Support the participant to understand their rights and provide information about independent pathways including the NDIS Commission and disability advocates. The system never acts as mediator between a participant and a provider against whom a complaint is being made.
Tone and Delivery
Every refusal follows the same pattern: acknowledge → explain → redirect. The system first acknowledges the user's feeling (so they feel heard), explains the boundary in plain language, then offers a constructive next step - ask a carer, contact a clinician, access trusted resources. Sarcasm and shaming are never used. Tone is always calm and human.
This approach is shaped by the Cognitive Care Standards Library™, which embeds trauma-aware communication standards, Easy Read principles, and OT-approved scaffolds into every interaction - not just in content, but in structure, pacing, and format.
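The acknowledge → explain → redirect pattern can be illustrated as a simple template. The field names and wording below are illustrative examples, not LearnAble's production copy.

```python
from dataclasses import dataclass

@dataclass
class Refusal:
    acknowledge: str  # validate the feeling behind the request
    explain: str      # state the boundary in plain language
    redirect: str     # offer a constructive, human next step

    def render(self) -> str:
        # The three parts are always delivered in this order.
        return f"{self.acknowledge} {self.explain} {self.redirect}"

example = Refusal(
    acknowledge="It sounds like this is really important to you.",
    explain="I can't give medical advice, because that needs a doctor.",
    redirect="Would you like help writing down questions for your GP?",
)
print(example.render())
```

Keeping the structure fixed while varying the wording is what lets tone stay calm and human across every refusal category listed above.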
Human Review and Intervention
When the system identifies a high or critical risk, it: logs the interaction securely with metadata for triage; flags the case for trained staff to carry out a timely human review; notifies the participant's nominated carer, guardian, or clinician in line with consent agreements and privacy law; and, where immediate danger is suspected, follows emergency-response procedures in partnership with local services.
Human reviewers may be the trained LearnAble safeguarding team, clinical partners, or nominated carers, depending on the context and each participant's consent settings.
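As a sketch, the escalation steps described in this section could be modelled as an ordered pipeline. The step names (`log_secure`, `flag_for_human_review`, and so on) are hypothetical labels for illustration, not real system identifiers.

```python
from enum import IntEnum

class Risk(IntEnum):
    LOW = 0
    MODERATE = 1
    HIGH = 2
    CRITICAL = 3

def escalate(risk: Risk, consented_contacts: list[str]) -> list[str]:
    """Return the ordered workflow steps for a given risk rating."""
    if risk < Risk.HIGH:
        return []  # no escalation for low or moderate interactions
    steps = ["log_secure", "flag_for_human_review"]
    # Contacts are notified only in line with consent settings.
    steps += [f"notify:{contact}" for contact in consented_contacts]
    if risk is Risk.CRITICAL:
        steps.append("emergency_response")  # imminent danger only
    return steps

print(escalate(Risk.CRITICAL, ["carer"]))
```

Note that notification is gated on the consented contact list, mirroring the policy that carers and clinicians are involved only in line with consent agreements and privacy law.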
Data Handling, Consent and Retention
All personal and conversational data is encrypted in transit and at rest. We retain the minimum necessary logs to support safety and auditing. Users are asked for consent before we save profile details; without consent, input is treated as temporary. Carers and guardians can request deletion or restriction of their dependent's data at any time. For more information, please visit our Privacy Policy at hellolearnable.com.au/privacy.
Onboarding Carers and Guardians
For participants under 18, and for many adult participants who rely on support, onboarding includes an invitation for carers or guardians to set preferences, emergency contacts, and consent options. LearnAble encourages co-participation so that critical conversations can involve a trusted human when needed. The Cognitive Blueprint Engine™ allows therapists and carers to configure inputs that shape how LearnAble interacts with each participant - without requiring technical expertise.
Continuous Review and Co-Design
Our Trust & Safety practice is not static. We regularly review incidents, update behavioural rules, and co-design improvements with independent occupational therapists, clinical advisors, NDIS safeguarding specialists, and people with lived experience. We publish policy updates where appropriate.
How to Raise Concerns or Report Issues
If you have a question or want to report a safety issue, please contact support@hellolearnable.com.au or via the contact page at hellolearnable.com.au.
We aim to respond to safety reports promptly and transparently.
© 2026 Learnity Pty Ltd (trading as LearnAble Technologies). All rights reserved.