Ethical AI & Responsible Data Use Policy
Version: 1.0 (February 2026)
Next Review: August 2026
Company: Learnity Pty Ltd trading as LearnAble Technologies
(Easy Read version available here)
Our Purpose
LearnAble is the world's first AI platform created specifically to support people with cognitive disability, neurodivergence, and complex cognitive needs. We believe technology should enhance human care - never replace it.
We exist to create Cognitive Mobility™: removing the cognitive barriers that prevent people from accessing the digital world independently. Our platform does this through a proprietary Cognitive Accessibility Layer™ that sits between the participant and the underlying AI, ensuring every interaction is safe, ethical, and adapted to each person's way of thinking.
This policy explains how we design, test, and govern the use of artificial intelligence at LearnAble. It should be read together with our Privacy Policy (hellolearnable.com.au/privacy), Terms & Conditions (hellolearnable.com.au/terms), and Safety & Safeguards Policy (hellolearnable.com.au/safety), each of which is incorporated into this policy by reference.
Governing Standards
World Health Organization (WHO) - Ethics & Governance of Artificial Intelligence for Health (2021)
FUTURE-AI - Principles for Trustworthy AI in Health
NDIS AI-Enabled Assistive Technology Framework
Privacy Act 1988 (Cth) and the Australian Privacy Principles (APPs), including amendments made by the Privacy and Other Legislation Amendment Act 2024 (Cth)
National AI Centre - Guidance for AI Adoption (AI6), October 2025
Australia's National AI Plan, December 2025
Australian Government - Safe and Responsible AI in Healthcare Review (DoHDA, March 2025)
TGA - Clarifying and Strengthening the Regulation of Medical Device Software including AI (July 2025)
AHPRA - Guidance on safe and professional use of AI in clinical practice
OAIC - Guidance on privacy and AI, October 2024
1. The Cognitive Accessibility Layer™: Our Ethical Architecture
Most AI platforms connect users directly to a large language model (LLM). LearnAble does not. We have built a Cognitive Accessibility Layer™ that sits between every participant and the underlying LLM. This is not a feature - it is the core ethical and technical infrastructure of our platform.
What the Layer Does
The Cognitive Accessibility Layer™ performs the following functions before any LLM output reaches a participant (a simplified sketch of the pipeline follows this list):
Cognitive load reduction - filters noise, removes unnecessary complexity, and prevents overwhelm before it occurs
Information structuring - converts abstract or dense content into clear, sequential, and comprehensible formats
Pace regulation - adapts interaction speed to match each person's cognitive capacity, rather than forcing them to keep up
Contextual adaptation - accounts for fatigue, stress, disability type, processing differences, and situational cognitive load
Comprehension before action - verifies understanding before prompting decisions, rather than assuming it
Autonomy preservation - guides and prompts without taking control or creating unhealthy reliance
Safety filtering - screens LLM outputs for content that may be inappropriate, harmful, or misaligned for participants with cognitive vulnerability
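To make the architecture concrete, here is a minimal Python sketch of the mediation pattern described above. It is illustrative only: the Profile fields, the placeholder is_safe and simplify functions, and every threshold are assumptions made for this sketch, not LearnAble's actual implementation.

```python
from dataclasses import dataclass


@dataclass
class Profile:
    """Illustrative participant profile; fields are assumptions, not LearnAble's schema."""
    max_response_words: int = 80   # length ceiling for any single reply
    reading_pace_wpm: int = 90     # used to pace how quickly content is delivered


def is_safe(text: str) -> bool:
    """Placeholder safety filter; a real system would use vetted, clinically reviewed checks."""
    blocked_terms = {"example-harmful-term"}  # illustrative only
    return not any(term in text.lower() for term in blocked_terms)


def simplify(text: str, profile: Profile) -> str:
    """Placeholder structuring step: cap length; real simplification is far richer."""
    words = text.split()
    return " ".join(words[: profile.max_response_words])


def mediate(llm_output: str, profile: Profile) -> str:
    """Run every model output through safety and adaptation checks before display."""
    if not is_safe(llm_output):
        # Unsafe output never reaches the participant; a human is brought in instead.
        return "This answer needs a person to check it first. Someone will help shortly."
    return simplify(llm_output, profile)
```

The property the sketch is meant to show is ordering: no model output is displayed until it has passed the safety and adaptation steps.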
Why This Matters Ethically
The Layer is the primary mechanism by which LearnAble upholds its ethical commitments. Without it, participants with cognitive disability would interact with a general-purpose AI system designed for neurotypical users - one that assumes uniform cognitive capacity, moves at a fixed pace, and does not account for the ways cognition can fail under stress, fatigue, or complexity.
The Layer is also what distinguishes LearnAble from a generic AI assistant. It is built on clinical occupational therapy reasoning, disability and NDIS system knowledge, and a deep understanding of when support becomes control and when help becomes dependence. These distinctions are not programmable features - they are embedded in the Layer's design logic, developed in partnership with our co-founder, Kezia Kingston, and our clinical team.
What the Layer Is Not
The Cognitive Accessibility Layer™ is not a therapeutic tool, clinical diagnostic instrument, or substitute for professional clinical judgement. It does not qualify as a medical device under the Therapeutic Goods Act 1989 (Cth). All clinical decisions remain with qualified practitioners and participants' support networks.
2. People First
Accessible and Informed Consent
We ensure every person understands when and how AI is used in their support. Information is provided in plain language and accessible formats where needed. Guardians, carers, and trusted supporters are included in consent processes. Consent can be withdrawn at any time without affecting a participant's access to support.
Respect and Dignity
Participants always remain in control of their choices, their data, and their support. The Cognitive Accessibility Layer™ guides and supports - it does not override, direct, or make decisions on a participant's behalf. AI will never make clinical or personal decisions autonomously.
Supporting Decision-Making, Not Replacing It
LearnAble recognises that many participants have supported decision-making arrangements under the NDIS or applicable guardianship legislation. Our platform is designed to support a participant's own decision-making capacity, not substitute for it. Where a participant has a legal guardian or appointed representative, we ensure all relevant consents and access authorisations are obtained through those appropriate channels.
Co-Design in Everything We Do
LearnAble is built with the people it serves. Participants, carers, and clinicians contribute to every stage of design - from initial research through to real-world use and ongoing improvement. We do not design for our users; we design with them.
3. Our Core Ethical Principles
Our approach combines the WHO's six ethics principles for AI in health with LearnAble's own values, operationalised through the Cognitive Accessibility Layer™.
Human Autonomy: People remain in control. The Cognitive Accessibility Layer supports decisions; it never makes them.
Wellbeing & Safety: We test thoroughly, monitor continuously, and act immediately on any safety concern.
Transparency: We explain what our AI does and why, in language anyone can understand.
Accountability: We take responsibility for how AI performs and how it affects participants.
Inclusiveness & Fairness: We design for all users, actively identifying and reducing bias and cognitive barriers.
Responsiveness & Sustainability: We keep improving as technology evolves, users' needs change, and regulation develops.
4. Cognitive Safety
Because LearnAble serves people with cognitive disability and neurodivergence, we recognise a distinct category of risk that standard AI governance frameworks do not address: cognitive safety.
Key Cognitive Safety Commitments
Dependency prevention - the Cognitive Accessibility Layer™ is specifically designed to build independence, not reliance. We monitor for signs that a participant is becoming dependent on AI support in ways that undermine their capacity development goals, and we design off-ramps that progressively support capability-building.
Capacity-appropriate communication - all AI outputs are adapted to the participant's comprehension level, communication style, and cognitive profile. We do not use generic language that assumes uniform literacy or processing capacity.
Distress and confusion recognition - the Layer is designed to detect signals that a participant may be confused, distressed, or cognitively overwhelmed, and to trigger escalation to a human carer or clinician when those signals are present.
Complexity limits - the Layer applies hard limits on the complexity, length, and pace of AI-generated content, preventing the underlying LLM from producing outputs that may exceed a participant's safe processing capacity (illustrated in the sketch after this list)
No unsolicited prompting - LearnAble does not use engagement-optimisation techniques (such as push notifications, behavioural nudges, or habit-forming interactions) that could exploit cognitive vulnerability.
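As a hedged illustration of how hard limits like these can be enforced, the sketch below gates an output on word count, sentence length, and a crude readability proxy. The thresholds and heuristics are assumptions made for the example, not LearnAble's published parameters.

```python
import re


def within_complexity_limits(text: str,
                             max_words: int = 80,
                             max_words_per_sentence: int = 12,
                             max_syllables_per_word: float = 1.8) -> bool:
    """Return True only if text stays inside hard length and complexity ceilings.

    All thresholds here are illustrative; a production gate would be
    clinically tuned per participant.
    """
    words = text.split()
    if not words or len(words) > max_words:
        return False
    sentences = [s for s in re.split(r"[.!?]", text) if s.strip()]
    if any(len(s.split()) > max_words_per_sentence for s in sentences):
        return False
    # Crude readability proxy: average vowel groups per word approximates syllables.
    syllables = sum(len(re.findall(r"[aeiouy]+", w.lower())) or 1 for w in words)
    return syllables / len(words) <= max_syllables_per_word
```

An output that fails such a gate would be regenerated under stricter instructions or withheld for human review, never shown as-is.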
5. Automated Decision-Making Transparency
In accordance with the Privacy and Other Legislation Amendment Act 2024 (Cth) - specifically the new APP 1.7–1.9 obligations commencing 10 December 2026 - LearnAble discloses its use of automated and AI-assisted decision-making that may significantly affect participants' rights or interests.
How AI Assists Decisions in LearnAble
The Cognitive Accessibility Layer™ and underlying LLM assist - but do not make - decisions in the following areas:
Adapting communication style and content complexity to a participant's cognitive profile
Recommending task sequences, prompts, and support strategies based on a participant's goals and patterns
Flagging potential safety concerns or signs of distress for human review
Personalising daily routines and interaction flows
All recommendations generated by the AI are reviewed by qualified clinicians and support coordinators at regular intervals. No AI output automatically modifies a participant's support plan without human clinical authorisation.
Your Rights in Relation to Automated Decisions
Participants and their guardians have the right to:
Understand how AI is used in their support
Request a human review of any AI-assisted recommendation that affects their rights or support arrangements
Withdraw consent for AI-assisted support at any time
Receive explanations of AI outputs in plain language, through their preferred communication format
Requests for human review or explanation can be made by contacting support@hellolearnable.com.au.
6. Third-Party AI Provider Governance
LearnAble uses third-party large language model (LLM) providers as part of its technical architecture. The Cognitive Accessibility Layer™ mediates all interactions between participants and these underlying models. We maintain rigorous governance of our AI supply chain.
Our Commitments Regarding Third-Party AI
Participant data is never used to train or fine-tune the underlying LLM without explicit, separately obtained consent.
All third-party AI providers are assessed against our data protection requirements, including data residency, sub-processing obligations, and breach notification timelines.
Data processing agreements are in place with all AI providers that handle participant personal information.
We do not disclose participant identities to third-party AI providers. The Layer manages personalisation within LearnAble's systems, not within the LLM itself (see the pseudonymisation sketch below).
We conduct annual vendor reviews to assess changes in third-party AI providers' practices, terms, and data handling.
Current third-party AI providers and their data handling practices are disclosed in our Privacy Policy (hellolearnable.com.au/privacy). Participants may request further information about specific providers by contacting us at support@hellolearnable.com.au.
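One common engineering pattern for keeping identities out of third-party models is pseudonymisation at the system boundary, sketched below. The token format, the build_llm_request shape, and the mapping store are illustrative assumptions, not a description of LearnAble's production systems.

```python
import uuid

# In-memory mapping for the sketch only; a real system would use a secured,
# audited store that never leaves the platform's own infrastructure.
_pseudonyms: dict[str, str] = {}


def pseudonymise(participant_id: str) -> str:
    """Swap a participant identifier for an opaque token before any external call."""
    if participant_id not in _pseudonyms:
        _pseudonyms[participant_id] = f"user-{uuid.uuid4().hex[:12]}"
    return _pseudonyms[participant_id]


def build_llm_request(participant_id: str, prompt: str) -> dict:
    """Personalisation stays internal; the external model sees only an opaque token."""
    return {"session": pseudonymise(participant_id), "prompt": prompt}
```

The design choice being illustrated is that the mapping between real identities and tokens never leaves the platform's own systems.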
7. Human Oversight & AI Incident Response
Meaningful Human-in-the-Loop
LearnAble is designed around the principle that human oversight must be meaningful, not nominal. We do not rely on AI to make consequential decisions about participants, and we define 'meaningful oversight' as follows:
Treating clinicians and support coordinators review AI-generated recommendations at regular intervals (no less than every six weeks for active participants)
The Cognitive Accessibility Layer™ automatically escalates to a human when it detects distress, safety concerns, or responses that fall outside defined safety parameters (sketched after this list)
No participant's support plan, funding recommendations, or daily routine structure is modified by AI without clinical authorisation
Safety escalation pathways are live at all times and tested quarterly
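The escalation behaviour described above can be pictured as follows. The Signal categories and the stub functions are hypothetical; real escalation runs through clinical on-call pathways rather than print statements.

```python
from enum import Enum


class Signal(Enum):
    """Illustrative escalation triggers; real categories would be clinically defined."""
    DISTRESS = "distress"
    SAFETY_CONCERN = "safety_concern"
    OUT_OF_BOUNDS = "out_of_bounds_response"


def pause_ai_session(participant_id: str) -> None:
    # Stub: a real system would suspend the session server-side.
    print(f"AI session paused for participant {participant_id}")


def notify_human_reviewer(signal: Signal, participant_id: str) -> None:
    # Stub: a real system would page an on-call carer or clinician.
    print(f"Human review requested ({signal.value}) for participant {participant_id}")


def escalate(signal: Signal, participant_id: str) -> None:
    """Escalation is unconditional: no retry or self-correction sits between
    detection and a human being notified."""
    pause_ai_session(participant_id)
    notify_human_reviewer(signal, participant_id)
```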
AI Incident Classification and Response
LearnAble distinguishes data breach incidents (governed by the Notifiable Data Breaches scheme) from AI harm incidents, which differ in nature and require a separate response process.
An AI harm incident includes - but is not limited to - an AI response that causes or contributes to participant distress, incorrect or harmful guidance, failure to escalate a safety concern, or a systematic bias identified in AI outputs.
Our AI incident response process includes:
Immediate withdrawal of the relevant AI output or interaction from active use
Notification to the affected participant and/or their guardian or representative
Escalation to the clinical oversight team within 24 hours
Root cause analysis and documentation within 5 business days
Remediation and, where required, system reconfiguration
Review at the next scheduled Ethics and Risk Committee meeting
We maintain an AI Incident Register, updated in real time and reviewed at each formal governance cycle; an illustrative register entry follows.
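For illustration, a register entry might encode the response-time obligations listed above directly, so that overdue steps are visible at a glance. The field names below are assumptions made for this sketch, not LearnAble's actual register schema.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta


@dataclass
class AIHarmIncident:
    """Illustrative register entry mirroring the response steps above."""
    description: str
    detected_at: datetime
    output_withdrawn: bool = False      # step 1: output pulled from active use
    participant_notified: bool = False  # step 2: participant/guardian informed

    @property
    def clinical_escalation_due(self) -> datetime:
        # Escalation to the clinical oversight team is due within 24 hours.
        return self.detected_at + timedelta(hours=24)

    @property
    def root_cause_due(self) -> datetime:
        # Root cause analysis within 5 business days (approximated here
        # as 7 calendar days for simplicity).
        return self.detected_at + timedelta(days=7)
```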
8. How We Build Trustworthy AI
We follow the FUTURE-AI framework and the National AI Centre's Guidance for AI Adoption (AI6, October 2025) to ensure LearnAble remains safe, fair, and effective:
Fairness: we detect and correct bias in data and outputs, with specific attention to biases affecting people with disability, neurodivergence, and cognitive differences
Universality: we test with diverse participants across disability type, age, communication style, and cultural background
Traceability: we maintain complete records of model versions, data sources, Layer configurations, and system updates
Usability: our design prioritises accessibility for people with cognitive challenges, co-designed with participants and occupational therapists
Robustness: we test in real-world conditions, including low-bandwidth environments and shared device contexts, with strong fallback systems
Explainability: we provide clear, human-friendly explanations for all AI outputs, in accessible formats appropriate for the participant
9. Aligned with the NDIS
LearnAble aligns with the NDIS AI-Enabled Assistive Technology Framework, the NDIS Practice Standards, and the NDIS Code of Conduct. Our approach ensures:
Quality and User Experience - co-designed with participants, carers, and clinicians at every stage
Demonstrated Value - measured through support hour reduction, independence outcomes, and validated clinical metrics
Safety - continuous monitoring, safety escalation procedures, and real-time incident response
Privacy - encryption, access controls, Australian data residency, and full compliance with privacy law and OAIC guidance, including the new Privacy and Other Legislation Amendment Act 2024
Human Rights - every participant's dignity, choice, and control are respected and actively protected at all times
10. Protecting Your Data
We treat personal information with the same care we would want for ourselves.
We comply with the Privacy Act 1988 (Cth), the Australian Privacy Principles, the Privacy and Other Legislation Amendment Act 2024, and the NDIS Practice Standards
Sensitive data is encrypted, access-controlled, and stored securely in Australia
We never sell, license, or use participant data for advertising, profiling, or purposes unrelated to the participant's own support
We follow the Notifiable Data Breaches scheme and maintain documented breach response procedures
We will update our Privacy Policy before 10 December 2026 to include all required automated decision-making disclosures under APP 1.7–1.9
The new statutory tort for serious invasions of privacy (in force since 10 June 2025) reflects rights we have always intended to uphold; participants may exercise these rights through our complaints process or directly through the courts
For complete details on how we collect, store, and protect personal information, please see our Privacy Policy at hellolearnable.com.au/privacy.
11. Responsibility and Oversight
We assign clear responsibilities across all areas of AI governance:
Clinical Oversight - Kezia Kingston (Co-Founder & Occupational Therapist) holds clinical responsibility for the Cognitive Accessibility Layer's therapeutic boundaries and participant safety
Technology & AI Governance - Lee Hunter (Co-Founder) holds responsibility for technology architecture, third-party AI provider governance, and regulatory compliance
Ethics & Privacy Lead - responsible for bias monitoring, incident review, privacy compliance, and participant advocacy
Human Oversight - the Layer is designed so that AI cannot take consequential action affecting a participant without human clinical review where safety is at stake
Risk Register - maintained and reviewed at each governance cycle to identify and address emerging issues
Transparency Reporting - we publish plain-language summaries of how our AI systems work, their performance, and known limitations
12. Transparency and Community Trust
We believe trust comes from openness. LearnAble will:
Publish accessible plain-language summaries of how our AI systems work ('model cards'), including the Cognitive Accessibility Layer's design logic
Clearly disclose when participants are interacting with AI-generated content
Provide accessible contact channels for questions, feedback, and complaints
Engage regularly with participants, carers, clinicians, disability advocacy organisations, and community partners
Proactively engage with the NDIS Quality and Safeguards Commission, the OAIC, and the new AI Safety Institute as the regulatory landscape evolves
13. Continuous Improvement
AI is not static - and neither are we. We will:
Monitor for bias, errors, and performance changes ('model drift') on a continuous basis (see the sketch after this list)
Re-validate the Cognitive Accessibility Layer after major updates, new use cases, or participant population changes
Conduct annual independent ethics reviews against evolving Australian and international standards
Proactively prepare Privacy Policy and governance documentation to meet the APP 1.7–1.9 automated decision-making obligations ahead of the December 2026 deadline
Review third-party AI provider arrangements annually for changes in data practices, terms, or regulatory compliance
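As an illustration of what continuous drift monitoring can look like in code, the sketch below flags a shift in a monitored output metric relative to its baseline. The metric choice and tolerance are assumptions made for the example; a production pipeline would use proper statistical tests across multiple metrics.

```python
from statistics import mean


def drift_alert(baseline_scores: list[float],
                recent_scores: list[float],
                tolerance: float = 0.15) -> bool:
    """Flag possible model drift when a monitored output metric (for example,
    a readability score) moves more than `tolerance` relative to its baseline.
    """
    base = mean(baseline_scores)
    recent = mean(recent_scores)
    return abs(recent - base) > tolerance * abs(base)
```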
14. Our Public Commitments
We commit that:
AI will never replace human care, consent, or clinical judgement.
The Cognitive Accessibility Layer™ will always stand between participants and the underlying AI, ensuring safe, adapted, and ethically governed interactions.
All technology will be co-designed with people with cognitive disability and their support networks.
Bias and safety results will be reviewed and published in accessible formats.
We will comply with Australian privacy law, OAIC guidance, NDIS standards, and the AI6 framework.
Transparency, traceability, and accountability will always guide our work.
We will keep this policy current as law, standards, and participant needs evolve.
15. Regulatory Scope Clarification
LearnAble's platform is an assistive technology solution governed by the NDIS AI-Enabled Assistive Technology Framework. It is not classified as a therapeutic good or medical device under the Therapeutic Goods Act 1989 (Cth). The platform does not provide clinical diagnosis, prescribe treatment, or generate clinical assessments. All clinical decisions are made by qualified practitioners in accordance with their professional obligations under AHPRA and relevant codes of conduct.
We actively monitor the TGA's evolving position on AI-enabled software to ensure ongoing regulatory alignment as the framework develops.
16. Keeping This Policy Up to Date
This policy is reviewed every six months, or sooner if laws or standards change, technology advances, or participants' needs evolve. The latest version will always be available at hellolearnable.com.au/ethics.
17. Contact Us
If you have questions or feedback about LearnAble's approach to ethical AI or data use, please contact us at support@hellolearnable.com.au or visit hellolearnable.com.au/ethics.
© 2026 Learnity Pty Ltd (trading as LearnAble Technologies). All rights reserved.