Ethical AI & Responsible Data Use Policy

Effective Date: October 2025

Next Review: April 2026

Our Purpose

LearnAble is the world’s first AI platform created to support people with cognitive disability.

We believe technology should enhance human care, never replace it.

This policy explains how we design, test, and use artificial intelligence (AI) in ways that are ethical, transparent, and centred on people.

This policy should be read together with LearnAble’s Privacy Policy and Terms & Conditions.

While those documents explain how we collect, store, and use personal information and outline user rights and responsibilities, this policy focuses specifically on the ethical design and use of AI within LearnAble.

It builds on our Privacy Policy and Terms & Conditions, and aligns with trusted global and national standards, including:

  • World Health Organization (WHO): Ethics & Governance of Artificial Intelligence for Health

  • FUTURE-AI: Principles for Trustworthy AI in Health

  • NDIS AI-Enabled Assistive Technology Framework

  • Privacy Act 1988 (Cth) and the Australian Privacy Principles (APPs)

  • AHPRA guidance on safe and professional use of AI

1. People First

Accessible and Informed Consent

We make sure every person understands when and how AI is used. Information is provided in plain language and with visuals where needed. Guardians or trusted supporters are included in decisions, and consent can be withdrawn at any time.

Respect and Dignity

Participants always remain in control of their choices, data, and support. AI will never make clinical or personal decisions on its own.

Co-Design in Everything We Do

LearnAble is built with the people it serves. Participants, carers, and clinicians help shape every stage, from research to real-world use.

2. Our Core Ethical Principles

Our approach combines the WHO’s six ethics principles with LearnAble’s own values:

  1. Human Autonomy: People remain in control; AI supports, not replaces.

  2. Wellbeing & Safety: We test thoroughly and act quickly on any safety concern.

  3. Transparency: We explain what our AI does and why, in language anyone can understand.

  4. Accountability: We take responsibility for how AI performs and how it affects people.

  5. Inclusiveness & Fairness: We design for all users, actively reducing bias and barriers.

  6. Responsiveness & Sustainability: We keep improving as technology and user needs evolve.

3. How We Build Trustworthy AI

We follow the FUTURE-AI framework to ensure LearnAble remains safe, fair, and effective:

  • Fairness: We detect and correct bias in data and outputs.

  • Universality: We test with diverse users and devices.

  • Traceability: We keep full records of model versions, data sources, and updates.

  • Usability: Our design prioritises accessibility for people with cognitive disability.

  • Robustness: We test in real-world conditions with strong backup systems.

  • Explainability: We give clear, human-friendly explanations for all AI actions.

4. Aligned with the NDIS

LearnAble aligns with the NDIS AI-Enabled Assistive Technology Framework by ensuring:

  • Quality and User Experience: Co-designed with participants and carers.

  • Value: Demonstrated improvements in independence and daily life.

  • Safety: Continuous monitoring and red-flag escalation procedures.

  • Privacy: Encryption, secure data handling, and full compliance with OAIC standards.

  • Human Rights: Every person’s dignity, choice, and control are respected at all times.

5. Protecting Your Data

We treat your personal information with the same care we’d expect for our own.

  • We comply with the Privacy Act 1988, APPs, and NDIS Practice Standards.

  • Sensitive data is encrypted, access-controlled, and stored securely in Australia.

  • We never sell or use participant data for advertising or unrelated purposes.

  • We follow the Notifiable Data Breaches (NDB) scheme and have clear response procedures in place.

For full details of how we collect, store, and protect your personal information, please see our Privacy Policy.

6. Responsibility and Oversight

We assign clear responsibilities for every area of AI governance:

  • Human Oversight: AI never acts without human review where safety matters.

  • Ethics & Privacy Leads: Named owners oversee bias, safety, privacy, and advocacy.

  • Risk Register: Updated regularly to identify and address potential issues early.

  • Transparency Reports: We openly publish information about how our AI works and performs.

7. Transparency and Community Trust

We believe trust comes from openness.

LearnAble will:

  • Publish summaries of how our AI systems work (“model cards”).

  • Clearly disclose when users are interacting with AI.

  • Provide contact channels for questions or feedback.

  • Engage regularly with participants, carers, clinicians, and community partners.

8. Continuous Improvement

AI is not static, and neither are we.

We:

  • Monitor for bias, errors, and performance changes (“model drift”).

  • Re-validate systems after major updates or new use cases.

  • Conduct annual independent ethics reviews.

  • Measure and reduce environmental impacts from computing resources.

9. Our Public Commitments

We commit that:

  • AI will never replace human care, consent, or clinical judgement.

  • All technology will be co-designed with people with cognitive disability.

  • Bias and safety results will be reviewed and published.

  • We will comply with Australian privacy law, OAIC guidance, and NDIS standards.

  • Transparency, traceability, and accountability will always guide our work.

10. Keeping This Policy Up to Date

This policy is reviewed every 6 months, or sooner if:

  • laws or standards change,

  • technology advances, or

  • participants’ needs evolve.

The latest version will always be available at hellolearnable.com.au/ethics.

Contact Us

If you have questions or feedback about LearnAble’s approach to ethical AI or data use, please contact us at: privacy@hellolearnable.com.au