RTOScool
For any support or consultation, feel free to reach out to us at info@rtoscool.com. We're here to help and look forward to assisting you!
Most RTOs know they should be conducting self-assurance reviews, but few have a structured method for examining evidence, practice, and risk across their organisation.
RTOScool provides structured self-assurance reviews that translate regulatory expectations into clear organisational insight.
This tool supports internal self-assurance and risk awareness. It does not determine regulatory compliance or replace formal regulatory review.
______________________________________________________________________________________
What this tool does
This tool runs a structured self-assurance review on selected areas of your organisation.
Each review considers the five questions you answer, the document information you copy and paste into the information box provided, and how arrangements operate in practice, producing a dated summary report and checklist for internal use.
This is a structured review process that produces outputs specific to your organisation — not generic guidance or template content.
How the review is used
Each review is:
- scoped to one stage of the learner lifecycle, and
- run specifically for your organisation, on the date you complete it.
The review:
- does not assess compliance,
- does not verify evidence, and
- does not issue audit findings or recommendations.
It is designed to help you understand where systems appear established and where potential risk exposure commonly arises, based on the information you provide.
The outputs are intended for internal use only and remain under your control. Your responses are not shared with any regulatory body or third party.
Who this review is designed for
This review supports internal self-assurance and compliance for RTO and CRICOS providers, helping identify where arrangements may need attention before an audit.
This review is designed for people in leadership and compliance roles who oversee training and assessment.
This review is not suitable for schools, universities, other higher education providers, or unregistered organisations.
How your review works
- Select a lifecycle area — each review focuses on one stage of the student journey.
- Answer five self-assurance questions about how this area operates in practice.
- Copy and paste relevant excerpts from your policies, procedures or other documents into the box provided — the more context you provide, the more useful your report will be.
- Enter your organisation details — your name, position, RTO or CRICOS number and email address.
- Submit your review — your responses are analysed using a structured framework built on real audit methodology.
- Receive your report and invoice typically within 48 hours — a dated self-assurance summary and actionable checklist delivered to your inbox.
- Pay your invoice within 14 days by bank transfer.
Self-assurance review pricing
- Price: AU$100 per review area (GST not applicable).
- Transparency: Total pricing is displayed upfront.
- Discounts: Apply codes at submission (one per review, one-time use unless otherwise specified).
- Delivery: Typically within 2 business days (subject to review complexity).
- Payment: Invoice issued with your report, 14-day terms.
- Value: Professional-grade self-assurance without the consultant price tag. Fixed pricing protects your margins, while our expert review helps safeguard your RTO registration.
How this review is generated
Every RTOScool review is built on a structured self-assurance framework developed by a former ASQA auditor with direct experience conducting regulatory audits and evaluating evidence against the Standards for RTOs.
That methodology has been encoded into the review engine. The questions you answer, the way responses are analysed, and the structure of the final report reflect how regulatory evidence is interpreted and how risks are identified.
The framework evaluates how evidence aligns across policies, procedures, and practice. By examining how different sources of evidence support or contradict each other, the review identifies areas where risks may arise from gaps in implementation, consistency, or documentation.
The purpose of the review is to support internal self-assurance — helping organisations examine their own systems, evidence, and practices against regulatory expectations.
The result is a dated, structured report scoped specifically to your organisation that can be used for internal review, governance discussions, and ongoing quality assurance.
Where appropriate, organisations may also choose to present the report as supporting evidence during a regulatory audit.
Before you begin
Before completing your review, take a few minutes to gather relevant documents and think through how each area actually operates in your organisation — not how it should operate on paper, but how it works in practice. The more honest and specific your responses, the more useful your report will be.
RTO & CRICOS Student Lifecycle (the seven stages)

Marketing & Recruitment
What this review examines
This review examines how your marketing and recruitment arrangements operate in practice, including how information is developed, applied, and kept current.

Form guidance for the next step

Q1 - STRATEGY: How are responsibilities and accountabilities for this area defined and managed?
Consider these questions: Who actually makes the final decision on marketing content before it goes public? If something misleading gets published, who is held accountable? How do you know your recruitment staff understand what they can and can't promise to students? Who checks that third parties (agents, partners) aren't making promises you can't keep? What happens when the person responsible is unavailable - who steps in and how do they know what to do? How are these responsibilities documented so there's no confusion?

Q2 - IMPLEMENTATION: How are arrangements for this area implemented?
Consider these questions: Walk through your actual process from the moment a prospective student first contacts you until they're enrolled - what are the specific steps? What information MUST you give students before they sign anything? How do you ensure this actually happens every time? How do you verify that students genuinely understand what they're signing up for? What's your process for keeping marketing materials current when courses or regulations change? How do you handle situations where a student's expectations don't match what you can deliver? What systems or tools do you use to manage this process?

Q3 - EVIDENCE: How are arrangements for this area implemented and applied in day-to-day operations?
Consider these questions: Can you show actual examples of what students receive before they enrol? Looking at your recent enrolments, what evidence exists that students were given accurate information? If we reviewed your website, social media, and brochures right now, would they all tell the same story? What do your enrolment records actually show about the information provided to students? Can you trace back from a completed enrolment to prove what that student was told? How do staff actually use your processes in their daily work - is there a gap between the procedure and reality?

Q4 - MONITORING: How does the organisation monitor whether arrangements for this area are effective?
Consider these questions: How do you know if your marketing is actually attracting the right students? What tells you whether students understood what they were enrolling in? When students withdraw early or complain, do you analyse whether marketing or recruitment played a role? Who reviews your marketing materials and how often? What triggers a review? What data do you collect that would reveal problems in this area? How do you know your staff are following the processes consistently? What would alert you to a problem before it becomes serious?

Q5 - IMPROVEMENT: What risks have been identified for this area, and how are they managed?
Consider these questions: What could realistically go wrong in your marketing and recruitment? Be specific. Have you ever had students complain that the course wasn't what they expected? What did you learn? What prevents your staff from making promises you can't deliver? How do you catch misleading information before students see it? What improvements have you actually made in the last 12 months based on issues or near-misses? If a staff member left tomorrow, what knowledge or practices might leave with them? What's your biggest vulnerability in this area right now?

EXAMPLE INFORMATION TO INCLUDE:
✓ Marketing materials (brochures, website screenshots, advertisements)
✓ Enrolment forms and procedures (blank templates, not completed forms)
✓ Pre-enrolment information provided to students (template/sample)
✓ Position descriptions for marketing/recruitment staff (remove staff names)
✓ Evidence of information provided to students (de-identified examples)

Pre-Enrolment Assessment
What this review examines
This review examines how you assess students before they enrol to ensure they can successfully complete the training. This includes language, literacy and numeracy (LLN) assessments, and checking prerequisite skills or knowledge.

Form guidance for the next step

Q1 - STRATEGY: How are responsibilities and accountabilities for this area defined and managed?
Consider these questions: Who is actually qualified to assess whether a student is ready for your courses? What expertise or training do they have that makes them capable of making these judgments? When someone says a student isn't ready, who can override that decision and on what basis? If a student fails despite passing pre-enrolment assessment, who's accountable for that assessment decision? How do you ensure the person doing assessments isn't pressured to pass everyone? What happens if assessment staff are inconsistent in their decisions?

Q2 - IMPLEMENTATION: How are arrangements for this area implemented?
Consider these questions: At exactly what point in the enrolment process do you assess students - before they pay, after, or when? What specific tools or methods do you use and why did you choose them? How do you explain to students what the assessment is for and what happens based on results? What's your process when someone's skills are borderline - what are the decision criteria? If a student doesn't meet requirements, what are their actual options and how do you present them? How is assessment information recorded and who can access it? What happens if a student refuses to be assessed?

Q3 - EVIDENCE: How are arrangements for this area implemented and applied in day-to-day operations?
Consider these questions: Looking at your recent enrolments, can you show that every single student was assessed? What does a completed assessment actually look like in your records? Can you demonstrate how assessment results influenced what happened next for specific students? Are there examples where you declined to enrol someone or directed them elsewhere based on assessment? How do you know your assessment tools actually predict student success? If we sampled student files, would we see consistent assessment practices or variation? Do students receive different support based on their assessment results? Prove it.

Q4 - MONITORING: How does the organisation monitor whether arrangements for this area are effective?
Consider these questions: How do you know if your assessments are accurate predictors of student success? Do you track completion rates based on assessment scores? What does that data tell you? Have you ever discovered students who shouldn't have been enrolled? How did you find out? What tells you whether students with identified needs are actually getting appropriate support? Who checks that assessments are being conducted properly and consistently? How do you know if your assessment tools are still appropriate for current courses? What feedback do students give about the assessment process?

Q5 - IMPROVEMENT: What risks have been identified for this area, and how are they managed?
Consider these questions: What happens if you enrol students who aren't capable of succeeding? What prevents assessment staff from just passing everyone to boost enrolment numbers? How do you prevent good students from being incorrectly assessed as not ready? What could make your assessments inaccurate or unfair? Have you ever changed your assessment approach? What prompted the change? What would happen if your assessment tools were found to be inadequate? What's the biggest gap in your current assessment process?

EXAMPLE INFORMATION TO INCLUDE:
✓ Pre-enrolment assessment tools (blank templates)
✓ Assessment procedures and guidelines
✓ Examples of completed assessments (de-identified)
✓ Evidence of how results inform enrolment decisions (de-identified)
✓ Position descriptions for assessment staff (remove staff names)

Training and Assessment Design
What this review examines
This review examines how you design, plan, and deliver training and assessment. It assesses your training and assessment strategies, learning resources, trainer qualifications, and how you ensure quality delivery.

Form guidance for the next step

Q1 - STRATEGY: How are responsibilities and accountabilities for this area defined and managed?
Consider these questions: Who actually designs your training programs and what qualifies them to do so? How do you verify that trainers are actually competent, not just qualified on paper? Who decides if training quality is acceptable and what standards are they measuring against? When training or assessment quality is poor, who is held responsible and what are the consequences? How do you ensure trainers understand what's expected of them? Who has the authority to stop a trainer from delivering if quality concerns arise?

Q2 - IMPLEMENTATION: How are arrangements for this area implemented?
Consider these questions: How do you actually decide what training to deliver, when, and for how long? What's your process for ensuring training matches current industry practice, not outdated methods? How do you determine if the amount of training is enough for students to become competent? What resources do students actually receive and how do you know they're sufficient? How do you ensure assessment tasks actually test the right skills and knowledge? What's your process for making sure assessments are fair to all students? How do trainers know what they must deliver and how to deliver it?

Q3 - EVIDENCE: How are arrangements for this area implemented and applied in day-to-day operations?
Consider these questions: Can you show that what you planned to deliver is what actually happened? Looking at trainer files, what evidence exists that they're currently competent and industry-current? Can you demonstrate that students actually achieved competency, not just attended sessions? What do completed student assessments actually show about their capability? Are learning resources being used as intended or are trainers creating their own materials? If we observed a training session, would it match your documented strategy? How do you know students are getting consistent quality across different trainers?

Q4 - MONITORING: How does the organisation monitor whether arrangements for this area are effective?
Consider these questions: How do you know if your training is actually working - what evidence tells you? What data do you analyse about training quality? What does it reveal? How do you identify trainers who need improvement before students complain? What tells you whether your assessments are too easy, too hard, or just right? How often do you review assessment results to identify patterns or problems? What feedback mechanisms exist and how do you act on what you learn? Who observes training delivery and what happens with their findings?

Q5 - IMPROVEMENT: What risks have been identified for this area, and how are they managed?
Consider these questions: What could cause training or assessment quality to deteriorate? How do you prevent trainers from taking shortcuts or delivering substandard training? What stops unqualified or incompetent people from training your students? What happens if industry practices change and your training becomes outdated? Have you ever had to intervene due to quality concerns? What triggered it and what changed? What improvements have you made based on student outcomes or feedback? What's your weakest point in training and assessment quality?

EXAMPLE INFORMATION TO INCLUDE:
✓ Training and assessment strategies
✓ Trainer/assessor qualifications and CVs (remove personal contact details and addresses)
✓ Learning resources (student materials, workbooks - can use actual or samples)
✓ Assessment tools and marking guides (blank templates)
✓ Evidence of training delivery (de-identified attendance records, remove student names)

Assessment Validation & Industry Engagement
What this review examines
This review examines how you ensure your assessments are high quality and meet industry needs. It assesses your validation processes, industry consultation, and how you keep training current and relevant.

Form guidance for the next step

Q1 - STRATEGY: How are responsibilities and accountabilities for this area defined and managed?
Consider these questions: Who is responsible for ensuring assessments are actually validated - and do they have the authority to enforce it? What qualifies your validators to judge assessment quality? Who maintains relationships with industry and how senior/credible are they? When industry says your training is outdated, who has the authority to make changes? How do you ensure validation isn't just a tick-box exercise done by the same people who created the assessments? Who is accountable if you lose touch with industry needs?

Q2 - IMPLEMENTATION: How are arrangements for this area implemented?
Consider these questions: What's your actual validation process - step by step, who does what and when? How do you decide which assessments to validate and how often? Who from industry actually participates in validation - what are their names, roles, and current industry involvement? How do you engage with industry beyond just asking them to validate assessments? What's your process for translating industry feedback into actual changes? How do trainers maintain genuine industry currency, not just attend a workshop once a year? What happens when validation identifies serious flaws?

Q3 - EVIDENCE: How are arrangements for this area implemented and applied in day-to-day operations?
Consider these questions: Can you show validation reports with real industry practitioner input, not just internal staff signing off? What evidence proves industry people actually reviewed assessments, not just had their names added to reports? Can you demonstrate how validation findings led to actual improvements? Looking at assessment tools, can you show different versions reflecting changes from validation? What evidence exists of genuine industry engagement - meeting minutes, emails, site visits? How do trainer files prove current industry engagement, not outdated experience from years ago? Are there examples where industry feedback fundamentally changed how you deliver training?

Q4 - MONITORING: How does the organisation monitor whether arrangements for this area are effective?
Consider these questions: How do you track whether validation is actually completed according to schedule? What tells you if validation is improving assessment quality or just going through the motions? How do you know industry feedback is being acted on, not filed away and forgotten? What data shows whether your training produces graduates who meet current industry standards? How do you verify trainers are maintaining industry currency - who checks and how? What would alert you if you're losing touch with industry needs? When was the last time industry feedback surprised you or challenged your assumptions?

Q5 - IMPROVEMENT: What risks have been identified for this area, and how are they managed?
Consider these questions: What happens if your assessments aren't validated or validation quality is poor? How do you prevent validation from becoming a paper exercise with no real scrutiny? What could cause you to lose contact with industry - and how do you prevent it? What stops training from becoming outdated as industry practices evolve? Have you ever discovered assessments were invalid or training was obsolete? What changed as a result? What improvements have come directly from validation findings or industry feedback? What's your biggest risk around staying current and relevant?

EXAMPLE INFORMATION TO INCLUDE:
✓ Validation policy and schedule
✓ Validation reports (de-identified - can show validator roles/titles, remove personal contact details)
✓ Industry engagement records (meeting minutes with organisation names only, remove individual contact details)
✓ Evidence of changes made from validation
✓ Trainer industry currency evidence (de-identified - show activities/qualifications, remove personal details)
✓ Assessment tool versions (before/after validation - blank templates)

Student Support and Wellbeing
What this review examines
This review examines how you support students throughout their training. It assesses support services, welfare arrangements, intervention for at-risk students, and how you create a safe and inclusive learning environment.

Form guidance for the next step

Q1 - STRATEGY: How are responsibilities and accountabilities for this area defined and managed?
Consider these questions: Who is qualified and authorised to provide support to students with learning difficulties or personal issues? How do you ensure support staff know when to help and when to refer to external professionals? Who identifies students at risk of failing and who intervenes - are these the same people or different? When a student is struggling, who is accountable for ensuring they get help? How do trainers know their responsibilities for student support versus specialist support staff? What happens when support needs exceed your staff's capabilities?

Q2 - IMPLEMENTATION: How are arrangements for this area implemented?
Consider these questions: How do students actually find out about support services - not just "it's in the handbook" but how do they really learn about it? What's the actual process when a trainer notices a student struggling - what are the specific steps? How do students access support - is it genuinely accessible or are there barriers? What support can you actually provide versus what do you refer externally? How do you determine what reasonable adjustments to make for students with disabilities or additional needs? What happens when a student needs support but doesn't ask for it? How do you balance support with maintaining assessment integrity?

Q3 - EVIDENCE: How are arrangements for this area implemented and applied in day-to-day operations?
Consider these questions: Can you show evidence that students who needed support actually received it? Looking at students who withdrew or failed, what support was offered and when? What do support records actually show about who received help and what difference it made? Are there examples of reasonable adjustments you've made - and how did you decide they were appropriate? How do you know students are aware of support services - what evidence exists beyond "we told them at orientation"? Can you demonstrate the support pathway from identification through to intervention and outcome? Do student files show active support or just tick-boxes saying support was "offered"?

Q4 - MONITORING: How does the organisation monitor whether arrangements for this area are effective?
Consider these questions: How do you know if support is actually helping students succeed? What data do you collect on support service usage and student outcomes? Do you analyse who accesses support and who doesn't - and what that tells you? How do you identify students who need support but aren't getting it? What feedback do students give about support quality and accessibility? How do you track whether early intervention prevents student failure? What tells you if support staff have the right skills and resources? When students withdraw, do you investigate whether support could have made a difference?

Q5 - IMPROVEMENT: What risks have been identified for this area, and how are they managed?
Consider these questions: What happens when students don't access support even though they need it? How do you prevent students from falling through the cracks? What could cause support services to be inadequate or ineffective? How do you ensure support staff don't overstep their expertise into areas requiring qualified professionals? Have you had situations where lack of support contributed to student failure? What changed? What improvements have you made based on student feedback or outcomes data? What's the gap between the support you'd like to provide and what you can actually deliver?

EXAMPLE INFORMATION TO INCLUDE:
✓ Student support policy and procedures
✓ Support service information and accessibility (brochures, website content)
✓ Records of support provided (MUST be de-identified - remove all student names and personal identifying information)
✓ At-risk student identification and intervention procedures (templates/blank forms)
✓ Examples of reasonable adjustments (de-identified case studies only)
✓ Staff qualifications for providing support (remove personal contact details)

Completion and Outcomes
What this review examines
This review examines how you support students to complete their training and achieve positive outcomes. It assesses completion rates, post-training outcomes, certification processes, and how you track graduate success.

Form guidance for the next step

Q1 - STRATEGY: How are responsibilities and accountabilities for this area defined and managed?
Consider these questions: Who is responsible for tracking whether students actually complete their training? Who decides when a student has met all requirements and is ready for certification? Who is accountable for completion rate targets - and what are those targets? How do you ensure certificates are issued accurately and on time? Who tracks what happens to graduates after they leave - and who acts on that data? What happens when completion rates are poor - who is responsible for investigating and improving?

Q2 - IMPLEMENTATION: How are arrangements for this area implemented?
Consider these questions: What's your actual process for supporting students to complete - not just delivering training, but actively working toward completion? How do you identify students at risk of not completing and what intervention occurs? What happens in the final stages when a student is close to finishing - how do you ensure they cross the line? What's your certification process - from determining completion to issuing the certificate - how long does each step take and why? How do you track graduate outcomes - employment, further study - what's the actual process? What support or services do you offer after students complete? How do you respond when students are stuck on a final assessment or missing one unit?

Q3 - EVIDENCE: How are arrangements for this area implemented and applied in day-to-day operations?
Consider these questions: What's your actual completion rate and how does it compare to similar organisations? Can you show examples of intervention that helped at-risk students complete? Looking at students who didn't complete, what were the reasons and could any have been prevented? What evidence exists of your certification timeframes - are they meeting your own standards? What do graduate outcome surveys actually tell you - and what's your response rate? Can you demonstrate examples where outcome data led to training improvements? Do you have evidence of students gaining employment or progressing in their careers?

Q4 - MONITORING: How does the organisation monitor whether arrangements for this area are effective?
Consider these questions: How do you track completion rates by course, cohort, or delivery mode - what patterns emerge? What data tells you why students don't complete - and how do you verify that data is accurate? How do you monitor certification processing times and identify bottlenecks? What systems track graduate outcomes beyond just sending a survey? How do you know if your completion support strategies are working? Do you benchmark your outcomes against industry or sector data? What early warning indicators tell you completion might become a problem?

Q5 - IMPROVEMENT: What risks have been identified for this area, and how are they managed?
Consider these questions: What causes students to fail to complete even though they were capable? How do you prevent students from getting stuck in the final stages indefinitely? What happens if completion rates drop significantly - what's your response plan? How do you ensure certificates aren't delayed, withheld inappropriately, or issued incorrectly? Have you improved completion rates over time? What specifically drove that improvement? What changes have you made based on graduate outcome data? What's your biggest completion challenge right now?

EXAMPLE INFORMATION TO INCLUDE:
✓ Completion tracking records and reports (aggregated data only - no individual student names)
✓ Completion rate data by course/cohort (statistical summaries, no personal data)
✓ Certificate issuance procedures and timeframes (policy/process documents)
✓ Graduate outcome survey results (aggregated/anonymised data only)
✓ Evidence of completion support interventions (de-identified examples)
✓ Analysis of non-completion reasons (aggregated data, no student identification)

Governance and Infrastructure
What this review examines
This review examines your organisation's quality systems, decision-making processes, resources, and infrastructure that support training delivery. It assesses how effectively your organisation plans, allocates resources, and maintains systems to ensure consistent quality.

Form guidance for next step

Q1 - STRATEGY: How are responsibilities and accountabilities for this area defined and managed?

Consider these questions:
- Who is responsible for ensuring the organisation has appropriate systems and resources to deliver quality training?
- How are decisions made about what resources (staff, facilities, equipment, systems) are needed?
- Who has authority to approve resource allocation for training delivery?
- How do you ensure the right people are making decisions about training quality and capability?
- What happens when resource limitations impact training quality, and who addresses this?
- How are quality management responsibilities distributed across the organisation?
- Who is responsible for monitoring what actually happens in classrooms and training delivery?

Q2 - IMPLEMENTATION: How are arrangements for this area implemented?

Consider these questions:
- What systems do you use to manage training operations (student records, document control, scheduling)?
- How do you identify what facilities, equipment, and resources are needed for each course?
- What is your process for ensuring you have sufficient qualified staff to deliver training?
- How do you plan for changes in delivery volume, new courses, or regulatory requirements?
- What quality management processes operate to ensure consistent standards?
- How do you ensure policies and procedures are current and actually followed?
- What administrative systems support trainers and students?
- How does information from self-assurance activities reach decision-makers?
- What forums or meetings exist for discussing quality, issues, and improvements?

Q3 - EVIDENCE: How are arrangements for this area implemented and applied in day-to-day operations?

Consider these questions:
- Can you show that your facilities and equipment are adequate for training delivery?
- What evidence exists that administrative systems support effective operations?
- Looking at your staffing, do you have adequate numbers and expertise for current delivery?
- Can you demonstrate that quality management processes are actually operating?
- What do document version controls and review dates show about system currency?
- Are there examples where resource planning enabled successful delivery?
- How do staff actually use your systems and procedures in daily operations?
- What records exist of management meetings where quality and self-assurance are discussed?
- Can you show how information flows from the operational level to management and back to staff?

Q4 - MONITORING: How does the organisation monitor whether arrangements for this area are effective?

Consider these questions:

Classroom and Delivery Quality:
- Who actually observes training sessions, and how often does this happen?
- What do classroom observation reports tell you about training quality?
- How do you ensure observations are objective and lead to improvement, not just compliance checking?
- What happens when observations identify quality concerns in delivery?
- Do observation findings lead to trainer development or support?

Staff Support and Development:
- How do you mentor and support trainers to improve their practice?
- What evidence shows that staff support and mentoring contribute to better outcomes?
- How do you identify trainers who need additional support or professional development?
- What is the link between staff development activities and quality improvement?

Information Flow and Communication:
- How does self-assurance information reach management? What is the pathway and timeframe?
- What management or leadership meetings occur where quality data is reviewed and decisions are made?
- How often do these meetings happen, and what is actually discussed?
- How is information from management communicated back to operational staff?
- What staff meetings or forums exist to keep everyone informed and in the loop?
- How do you ensure staff at all levels understand quality issues and improvement priorities?
- Can you show the flow of information: operations → management → decisions → communication to staff?

Systems and Resources:
- How do you identify when resources or systems are inadequate for quality delivery?
- What tells you whether administrative systems are helping or hindering operations?
- How do you know if policies and procedures are being followed or ignored?
- What data informs decisions about resource allocation and capability?
- How do you monitor whether your infrastructure is fit for purpose?
- What mechanisms capture operational issues before they affect training quality?

Q5 - IMPROVEMENT: What risks have been identified for this area, and how are they managed?

Consider these questions:
- What could cause systems or resources to become inadequate for delivery needs?
- How do you prevent resource constraints from compromising training quality?
- What happens when administrative systems fail or do not meet operational needs?
- How do you ensure business continuity when key staff or systems are unavailable?
- What operational risks have you identified, and how are they managed?
- What improvements have you made to systems, resources, or infrastructure?
- What is your biggest operational challenge in maintaining quality delivery?
- How do you prevent information silos where important quality data does not reach the right people?
- What happens when staff feel disconnected from quality processes or decision-making?
EXAMPLE INFORMATION TO INCLUDE:
✓ Organisational structure showing training delivery responsibilities (chart/diagram with role titles)
✓ Quality management framework or procedures
✓ Evidence of resource planning for training delivery
✓ Examples of administrative systems supporting operations
✓ Staff capacity planning (role numbers and expertise required - no staff names or personal details)
✓ Facilities and equipment suitable for training delivery (descriptions, photos without identifiable people)
✓ Classroom observation reports (de-identified - remove trainer and student names)
✓ Staff mentoring and support records (de-identified - show processes and outcomes, not personal details)
✓ Management meeting minutes (redact any confidential commercial or personal information)
✓ Staff meeting records (de-identified - remove staff names if discussing individual performance)
✓ Evidence of information flow from operations to management to staff (process documents, de-identified examples)
