Executive Summary
Small and medium-sized enterprises (SMEs) are navigating a pivotal transition between the era of unbridled algorithmic experimentation and the imminent arrival of rigorous statutory enforcement. As the global economy approaches the second half of the 2020s, the adoption of artificial intelligence (AI) has shifted from a fringe competitive advantage to a core operational necessity. However, this rapid diffusion has occurred within a significant governance vacuum. The Di Tran University Research Team identifies this period (2025–2026) as the “Pre-Regulatory Window,” a brief historical interval in which organizations can establish the foundations of trust and accountability before they are mandated by state and federal regimes.1 This report introduces the Minimal Viable AI Governance (MV-AIG) model, a streamlined institutional framework designed to bridge the gap between technical capability and regulatory readiness.
At the heart of the MV-AIG model is the thesis that a concise, documented AI Use and Governance Policy—when strategically aligned with the “Govern” function of the NIST AI Risk Management Framework (RMF) 1.0—delivers disproportionate institutional value. By formalizing risk ownership and data boundaries early, SMEs can effectively manage the “compliance purgatory” created by the divergence between proactive state laws, such as the Colorado AI Act (SB24-205), and the federal push for a “minimally burdensome” national policy under Executive Order 14365.3 This approach reframes governance not as a restrictive technical control, but as a form of institutional capital—a trust-building asset that signals maturity to regulators, investors, and insurers alike.6
The following analysis details the systemic risks inherent in the current “Governance Gap,” translates the complex NIST AI RMF into actionable operational reality, and defines the four essential pillars of the MV-AIG model: approved tool inventories, data access boundaries, human review mandates, and auditability standards. Through the contextual lens of research-driven and vocational institutions, this report demonstrates how early documentation provides a “rebuttable presumption of reasonable care,” positioning organizations to thrive in an environment where algorithmic accountability is no longer optional but a prerequisite for institutional legitimacy.9

The Governance Gap (2024–2026)
The chronological window spanning 2024 to 2026 is defined by a profound paradox: the unprecedented democratization of powerful large language models (LLMs) and agentic AI systems has occurred alongside a stagnation in organizational oversight. This “Governance Gap” is not merely a failure of administrative diligence but a systemic risk that threatens the stability of the small business ecosystem and the privacy of the constituents it serves.2 Data from 2024 indicated a nascent interest in AI among SMEs, but by 2025, usage rates had surged, with over a third of SMEs (35%) actively integrating these tools into their daily workflows—a 10-point increase from the previous year.12
The Convergence of Adoption and Risk
The acceleration of AI adoption in small firms has effectively closed the “implementation gap” that traditionally separated SMEs from large enterprises. According to the Small Business Administration (SBA) Office of Advocacy, large businesses used AI at 1.8 times the rate of small businesses in early 2024, but by the third quarter of 2025, this disparity had nearly vanished as small business usage reached 8.8% while large firm adoption stabilized.14 However, while the use of AI has equalized, the governance of AI has not. While 96% of small business owners express an intent to adopt emerging AI technologies, only one in five companies possesses a mature model for the governance of autonomous or generative systems.14
This lack of institutional infrastructure creates several second-order effects. First, the phenomenon of “Shadow AI”—the use of unsanctioned AI tools by employees without IT oversight—has become endemic. Approximately 65% of AI tools used within enterprise environments are currently operating outside the view of centralized security or compliance teams.2 The cost of this oversight void is measurable; “Shadow AI” usage is estimated to increase the average cost of a data breach by $670,000, as it becomes nearly impossible to verify compliance with privacy laws or to track where sensitive data has been ingested.2
Comparative Metrics of SME AI Maturity (2024–2026)
The following table synthesizes the growing divide between adoption rates and the implementation of security and governance measures within the SME sector.
| Institutional Metric | 2024 Baseline | 2025 Observed | 2026 Forecast | Primary Source |
| --- | --- | --- | --- | --- |
| SME Active AI Usage | 25–26% | 35–39% | 58%+ | 12 |
| Generative AI Specific Use | 18% | 26% | 40%+ | 13 |
| Inadequate Digital Security | 72% | 63% (basic) | 45% (target) | 8 |
| Governance Maturity (SMEs) | < 10% | 11–15% | 20% (estimated) | 12 |
| Reported Security Breaches | 16% | 32% | 40%+ (unmitigated) | 13 |
The implications of these statistics are stark. While 91% of AI-using SMBs report revenue increases, and 58% report saving over 20 hours per month through automation, the underlying security foundations are crumbling.14 The OECD research highlights that while businesses are eager for “AI-as-a-Service” models to reduce upfront costs, 72% currently operate with inadequate digital security measures.13 This creates a “fragility of success” where the productivity gains of AI could be wiped out by a single regulatory fine or security incident.
Sectoral Disparities and Systemic Implications
The Governance Gap is not distributed evenly across the economy. B2B service firms—including those in finance, law, and marketing—show significantly higher adoption rates (46%) compared to B2C firms and manufacturers (26%).12 This concentration in knowledge-heavy industries increases the risk of “professional liability” when AI models produce incorrect or “hallucinated” guidance.2 Furthermore, the rise of “Agentic AI”—systems capable of executing autonomous tasks like signing contracts or booking transactions—has introduced a new layer of legal uncertainty. As traditional agency law is tested by autonomous errors, organizations without a clear governance framework are left vulnerable to litigation regarding who bears liability for an AI agent’s disadvantageous or fraudulent actions.2
Ultimately, the absence of robust institutional capacity and ethical guidelines exacerbates the risk of misuse. Scholars warn that without clear protocols, AI systems in the public and private sectors may inadvertently compromise data privacy, reproduce systemic discrimination, or violate civil rights.11 The problem is fundamentally institutional rather than technological; public agencies and small organizations often lack the specialized resources to critically evaluate algorithmic outputs, leading to a reliance on “off-the-shelf” solutions that rarely meet specific security or compliance needs.11
NIST AI RMF 1.0 — Govern Function (Plain-Language Interpretation)
The National Institute of Standards and Technology (NIST) AI Risk Management Framework (RMF) 1.0 serves as the primary voluntary reference model for managing AI risks across the lifecycle.19 While the framework is comprehensive, its technical density often proves a barrier to adoption for small institutions. To be effective, the core “Govern” function must be translated into an operational reality for non-technical leadership. The Govern function is the “mother function” of the RMF; it is the cross-cutting anchor that enables the other three functions—Map, Measure, and Manage—to operate within a consistent organizational context.19
The Philosophy of a Risk-Aware Culture
In plain language, the Govern function is the process of establishing the “rules of the road” and the “seat of responsibility” for AI.20 It is not a software tool or a technical configuration, but a “culture of risk management” that is cultivated from the top down.23 For an SME, this means that before a single AI tool is deployed, leadership must decide what the organization values (e.g., transparency, fairness, privacy) and how much risk it is willing to tolerate to achieve its objectives.20
NIST defines seven core characteristics of “trustworthy AI” that should guide this governance culture:
- Valid and Reliable: The system performs as intended and produces consistent results over time.20
- Safe: The system does not cause physical or psychological harm or endanger civil liberties.20
- Secure and Resilient: The system can withstand adversarial attacks and maintain its integrity.20
- Accountable and Transparent: Responsibility for outcomes is clear, and the internal logic of the system is disclosed to stakeholders.20
- Explainable and Interpretable: A human can understand why the AI reached a specific conclusion or recommendation.11
- Privacy-Enhanced: The system respects data autonomy and minimizes the exposure of sensitive personal information.20
- Fair: Harmful biases are managed to prevent unlawful discrimination or unjust outcomes.3
Operationalizing Accountability and Policy
For the SME, the Govern function manifests through two primary administrative actions: the establishment of Accountability Structures and the formalization of AI Policies.20 Accountability requires identifying key stakeholders—such as the business owner, the IT lead, or a designated “AI Officer”—and clarifying who owns each part of the risk management process.20 This prevents the “diffusion of responsibility” where errors are blamed on “the machine” rather than the institution that deployed it.11
Policy formalization, on the other hand, involves creating transparent processes that align AI initiatives with the broader mission.20 For instance, a policy might dictate that AI can be used for drafting internal memos but never for final medical diagnosis or credit approval without a secondary human audit.10 It also involves addressing “Third-Party and Supply Chain Risks,” recognizing that most SMEs do not build their own AI but rather “deploy” models provided by major technology firms or open-source repositories.20
The Feedback Loop: Govern, Map, Measure, Manage
While this report focuses on the Govern function as the “Minimal Viable” entry point, its relationship to the other functions is critical for long-term maturity. The following table illustrates how the Govern function provides the “instruction set” for the rest of the RMF lifecycle.
| RMF Function | Governed Action | Operational Manifestation |
| --- | --- | --- |
| Govern | Establishing the Policy | Writing the “One-Page” AI Use Policy and assigning ownership.20 |
| Map | Establishing the Context | Identifying where AI is used and classifying it by risk level (e.g., “high-risk” HR vs. “low-risk” marketing).26 |
| Measure | Establishing the Metrics | Determining how the organization will check for accuracy, bias, and reliability.26 |
| Manage | Establishing the Response | Creating the “Incident Response Plan” for when the AI hallucinates or leaks data.26 |
By focusing on the Govern function first, an organization creates the “Institutional Memory” required to handle the rapid evolution of the technology. This is particularly relevant as NIST updates its guidance (e.g., the 2025 focus on Generative AI and “Model Provenance”) to address emerging threats like data poisoning and “jailbreaking”.21
Minimal Viable AI Governance (MV-AIG) Model
The Minimal Viable AI Governance (MV-AIG) model is designed as a “lean” interpretation of the NIST AI RMF, specifically tailored for resource-constrained environments. Its goal is to provide 80% of the risk mitigation value through 20% of the administrative effort. The model identifies four mandatory elements that form the basis of a defensible governance posture in a pre-regulatory era.28
1. Approved AI Tools (The Institutional Inventory)
The primary cause of governance failure in SMEs is the absence of a defined boundary between personal experimentation and professional deployment. The MV-AIG model requires the maintenance of a centralized “Approved Tool Inventory”.28 This list should not merely name the software (e.g., “ChatGPT”) but specify the approved use cases and the account settings required.28
A minimalist inventory documents the following for each tool:
- Provider and Version: Ensuring that the organization is using a commercially supported version with enterprise-grade privacy protections rather than a “free” personal account that may use inputs for training.2
- Authorized Purpose: Defining exactly what the tool is for (e.g., “summarizing public regulatory documents”) to prevent scope creep into sensitive areas.26
- Tool Owner: A specific human being within the organization who is responsible for monitoring updates, vendor security alerts, and usage patterns.21
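The three inventory fields above can be captured in a tiny data structure. The sketch below is a minimal illustration in Python; the `ApprovedTool` class, its field names, the sample entry, and the `is_approved` helper are all hypothetical choices for this example, not part of the MV-AIG model itself.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ApprovedTool:
    """One row of a hypothetical 'Approved Tool Inventory'."""
    name: str                 # the software, e.g. "ChatGPT"
    provider: str             # vendor and commercial tier
    enterprise_account: bool  # True if inputs are excluded from model training
    authorized_purpose: str   # narrow, plain-language use case
    owner: str                # the human responsible for monitoring the tool

# Illustrative inventory with a single fictional entry.
INVENTORY = [
    ApprovedTool(
        name="ExampleLLM",
        provider="ExampleVendor (Enterprise plan)",
        enterprise_account=True,
        authorized_purpose="Summarizing public regulatory documents",
        owner="IT Lead",
    ),
]

def is_approved(tool_name: str) -> bool:
    """Return True only for tools on the centralized white-list."""
    return any(t.name == tool_name for t in INVENTORY)
```

Even a spreadsheet serves the same purpose; the point is that approval is a lookup against one authoritative list, not an ad hoc judgment by each employee.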
2. Data Access Boundaries (The Prompt Guardrails)
The most immediate risk to an organization is the inadvertent disclosure of sensitive information to a third-party LLM provider. Data access boundaries establish “hard stops” for what can and cannot be “pasted” into an AI prompt.27 These boundaries must be communicated in plain language to every employee.
A “Minimal Viable” policy explicitly prohibits the entry of:
- Personally Identifiable Information (PII): Names, addresses, or identifiers of clients, students, or staff.27
- Proprietary IP and Trade Secrets: Internal source code, future business strategies, or unannounced product details.28
- Privileged Communications: Information subject to attorney-client privilege or HIPAA protections.28
Furthermore, the MV-AIG model advocates for the use of “Anonymity” by design. If a dataset must be analyzed by an AI, employees are directed to use redaction or anonymization techniques before ingestion, ensuring that the model never sees the underlying sensitive entities.27
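The redaction-before-ingestion step could be sketched as a simple substitution pass run before any text reaches a third-party model. The patterns below are deliberately simplistic placeholders for illustration only; a real data boundary would need far broader PII coverage (names, addresses, record numbers) plus human review.

```python
import re

# Illustrative redaction patterns; NOT a complete PII detector.
REDACTION_PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "PHONE": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
}

def redact(text: str) -> str:
    """Replace each match with a labeled placeholder so the model
    never sees the underlying sensitive entity."""
    for label, pattern in REDACTION_PATTERNS.items():
        text = pattern.sub(f"[{label} REDACTED]", text)
    return text
```

A policy built on this pattern treats the redaction step as mandatory: unredacted text simply has no path into the prompt.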
3. Human Review Responsibility (The Verification Mandate)
The MV-AIG model rejects the concept of autonomous AI decision-making for “consequential” tasks. It mandates that every AI-generated output be treated as a “draft” that requires explicit human authentication.28 This mandate is the primary defense against “hallucinations” and biased outcomes that could lead to legal liability.2
Specific requirements under this pillar include:
- Fact-Checking: Mandatory verification of all names, dates, citations, and mathematical calculations provided by an AI.28
- Tone and Brand Alignment: Ensuring the output reflects the organization’s ethical values and professional standards.28
- Dual-Approval for High-Risk Content: For content that impacts human safety, employment, or legal standing, a “second pair of eyes” (a qualified professional) must review the AI-assisted work before it is finalized.10
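The verification mandate above can be sketched as a simple approval gate: AI output stays a draft until named humans sign off, with two reviewers required for high-risk content. The `Draft` class and its fields are hypothetical names invented for this illustration.

```python
from dataclasses import dataclass, field

@dataclass
class Draft:
    """An AI-generated output held for human authentication."""
    content: str
    high_risk: bool                  # affects safety, employment, or legal standing
    approvals: list = field(default_factory=list)  # names of human reviewers

    def approve(self, reviewer: str) -> None:
        """Record a human sign-off (each reviewer counted once)."""
        if reviewer not in self.approvals:
            self.approvals.append(reviewer)

    def is_publishable(self) -> bool:
        """One sign-off for routine content; dual approval for high-risk content."""
        required = 2 if self.high_risk else 1
        return len(self.approvals) >= required
```

The design choice worth noting is that publishability is a property of recorded approvals, not of the content itself: there is no code path that releases an unreviewed draft.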
4. Record Retention and Auditability (The Compliance Trail)
In a pre-regulatory environment, the goal of record retention is to provide “provable security controls” in the event of an audit or inquiry.1 Small organizations must be able to demonstrate that they did not merely have a policy, but that they followed it.20
The MV-AIG model suggests the following minimalist audit trail:
- Access Logs: Maintaining records of who has access to which AI tools and through what accounts.28
- Incident Log: A simple document where mistakes (e.g., “Employee X accidentally uploaded a client file to ChatGPT”) are recorded along with the corrective action taken.28
- Quarterly Self-Assessments: Using a lean 20-item checklist to evaluate compliance with the AI policy, identifying gaps in training or tool oversight.29
- Retention Period: Aligning with standard business records or emerging AI laws (such as Colorado’s 3-year requirement for impact assessments), organizations should retain governance documentation for at least three years.10
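An incident log of this kind can be kept as an append-only file of JSON lines, one entry per incident with a timestamp and the corrective action taken. The sketch below uses an in-memory buffer for illustration; a real deployment would append to a retained file covered by the three-year retention period, and the field names are illustrative.

```python
import datetime
import io
import json

def log_incident(log, description: str, corrective_action: str) -> None:
    """Append one incident record as a single JSON line."""
    entry = {
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "description": description,
        "corrective_action": corrective_action,
    }
    log.write(json.dumps(entry) + "\n")

# Illustrative use with an in-memory stand-in for the log file.
buf = io.StringIO()
log_incident(
    buf,
    "Employee accidentally uploaded a client file to a public chatbot",
    "Vendor-side deletion requested; employee retrained on data boundaries",
)
```

Append-only JSON lines keep the log trivially greppable and tamper-evident enough for a small organization, while still being machine-readable for a quarterly self-assessment.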
| MV-AIG Element | Operational Constraint | Risk Mitigated | Regulatory Alignment |
| --- | --- | --- | --- |
| Approved Tools | White-list of vetted platforms. | Shadow AI & Vendor Risk | NIST Govern-2 |
| Data Boundaries | Prohibited data “hard-stops.” | Data Breach & Privacy | NIST Privacy Framework |
| Human Review | AI output must be human-authenticated. | Hallucination & Liability | CO SB24-205 (Reasonable Care) |
| Auditability | Access & Incident logging. | Regulatory Non-Compliance | FTC & State AG Oversight |
Institutional Case Context (Passive, Non-Promotional)
The necessity for a Minimal Viable AI Governance framework is best understood within specific institutional contexts where the balance between human empathy and technological automation is a core operational challenge. The following examples represent settings where such governance is not an “add-on” but a foundational requirement for institutional stability.
Research-Driven Human-Centered Systems: Di Tran University
Di Tran University serves as an academic and research framework for exploring “humanized AI” and “compliance-first governance”.33 Within this institution, the “Triadic Learning Architecture” is the primary research focus, emphasizing three pillars: technological precision, human service, and ethical leadership.33 This framework conceptualizes AI as an “instructional teacher” and an administrative automator designed to enhance efficiency while preserving the “irreplaceable essence of human connection”.33
Research at the university investigates how institutions can navigate the “precarious crossroads” of modern higher education, where regulatory frameworks often lag behind the rapid adoption of automation.35 In this context, the MV-AIG model is viewed as “Institutional Trust Infrastructure.” By automating documentation and formalities with precision, the institution aims to liberate faculty and students to focus on “shared growth” and “authentic connection,” thereby addressing systemic challenges like workplace loneliness and administrative inertia.33 The university’s “College of Humanization” research series specifically examines the 2026 regulatory landscape, advocating for models where profit and humanity are harmonized through documented ethical guardrails.34
Vocational Excellence and Compliance: Louisville Beauty Academy
The Louisville Beauty Academy (LBA) provides a contextual example of a vocational education environment that must adhere to rigorous state licensing and health standards.37 As a state-accredited beauty college, LBA operates under the oversight of the Kentucky Board of Cosmetology (KBC), where “lawful practice” and “safety education” are mandatory prerequisites for professional licensure.37 In this environment, the “Gold-Standard Model” of education requires that every step—from theory to clinical practice—be transparent, documented, and compliant with statutory language.39
Within such a vocational setting, AI tools are integrated to support diverse learners, including those with language barriers, through AI-assisted study support.37 However, this integration must be governed by clear boundaries to ensure that the “human touch” required for services like cosmetology is never compromised by an over-reliance on automated systems.37 LBA’s role as a “Public Knowledge Library” for the beauty industry illustrates the transition to proactive governance; by making regulatory explanations and safety education openly accessible, the institution reduces “misinformation and compliance risk” for its students and the broader licensed professional community.39 This vocational context demonstrates how documented governance serves as a “Freedom Factory,” enabling career acceleration while maintaining the “professional dignity” required by state law.38
Why Early Documentation Matters
The transition from a voluntary to a mandatory AI regulatory environment creates a high-stakes “first-mover” advantage for organizations that document their governance intent early. Between 2025 and 2026, the value of early documentation shifts from mere best practice to a primary legal and financial defense mechanism.1
The Safe Harbor of “Reasonable Care”
The most compelling reason for early documentation is the “Rebuttable Presumption” clause found in emerging state laws. Under the Colorado AI Act (SB24-205), organizations that deploy “high-risk” AI systems are required to use “reasonable care” to protect consumers from algorithmic discrimination.3 In any enforcement action brought by the State Attorney General, an organization is presumed to have met this standard if it can prove it complied with a recognized framework like the NIST AI RMF or ISO 42001.9
Early documentation—even a simple one-page policy and a tool inventory—provides the physical evidence needed to trigger this presumption. Conversely, organizations that wait for enforcement to begin before documenting their processes are viewed as having “haphazard efforts” or “no visible program,” making them the primary targets for regulatory scrutiny and litigation.6
Negotiating the “Managed Conflict” of Federal and State Law
The 2026 regulatory outlook is characterized by a “managed conflict” between state autonomy and federal preemption.4 On one hand, states like Colorado, California, and Texas are enacting comprehensive statutes that impose affirmative risk management obligations.3 On the other hand, Executive Order 14365 seeks to establish a “minimally burdensome” national policy framework that aims to preempt “onerous” state laws that might “thwart” U.S. AI dominance.5
In this fragmented landscape, the MV-AIG model serves as a “universal adapter.” Because it is built on the NIST AI RMF—a framework that both state and federal actors recognize—documented compliance with MV-AIG protects the organization regardless of which legislative path ultimately prevails. This “strategic independence” is particularly valuable for SMEs that lack the budget to maintain 50 different compliance regimes.4
Insurance Underwriting and Risk Transfer
The insurance market in 2026 has begun to mirror the cybersecurity landscape of the early 2010s. Carriers are increasingly requiring “documented evidence of adversarial red-teaming, model-level risk assessments, and specialized safeguards” as a prerequisite for underwriting AI-specific coverage.1 Organizations that can produce an MV-AIG audit trail are positioned to secure better terms and lower premiums. In contrast, those without documentation are increasingly being forced to accept “exclusion riders” that leave them fully exposed to the financial consequences of AI failure, which can include massive GDPR-style fines (up to 7% of global turnover) or multi-million dollar penalties for consumer protection violations.2
Implications for Regulators, Educators, and Investors
The adoption of a minimalist, pre-regulatory governance framework by the SME sector provides critical signals to the stakeholders who influence institutional growth and public policy.
For Regulators: Governance as a Triage Mechanism
State Attorneys General and federal agencies like the FTC increasingly use the presence of governance documentation as a “triage mechanism” for enforcement.1 In a world of “Shadow AI” and systemic bias, regulators cannot investigate every firm. They will prioritize those that show a total lack of oversight. Proactive AI governance signals “institutional maturity” and “risk awareness,” suggesting that if a problem occurs, the organization has the mechanisms in place to detect, report, and redress it.6 This shifts the regulatory dynamic from one of “punitive intervention” to “collaborative oversight.”
For Educators: The New Vocational Literacy
As AI becomes “woven into the fabric of our lives,” educators must redefine vocational literacy to include “algorithmic ethics and oversight”.6 It is no longer sufficient to teach the technical application of a tool; students must be trained in the “human oversight mechanisms” and “interpretability” required to use AI safely in professional settings.7 This includes teaching students how to identify bias in training data and how to navigate the “redefinition of institutional ethics” that occurs when automation mediates authority.11 Case contexts like Di Tran University and Louisville Beauty Academy emphasize that “AI skilling” must be paired with “human-centered design” to prevent the marginalization of human expertise.11
For Investors: Governance as “Alpha” and Risk Mitigation
For investors and venture capitalists, AI governance has evolved into a key indicator of “long-term institutional capital”.7 A startup or SME that has implemented MV-AIG is inherently more “auditable” and less likely to face business-threatening fines during the 2026 enforcement wave.8 Proactive governance is framed as a “strategic asset” that prevents “egg on face” moments—such as a chatbot making unauthorized promises or leaking proprietary data—which can erase brand value overnight.6 Investors are now looking for “accountability” that connects risks to actual mitigation practices, viewing this as a sign that a firm’s growth is sustainable and built on “trustworthy AI”.6
Conclusion: Governance as Institutional Capital
The Minimal Viable AI Governance (MV-AIG) model represents a fundamental shift in how small and mid-sized organizations perceive their relationship with technology. Rather than viewing regulation as an external threat to be resisted, this framework frames governance as a form of Institutional Capital—a permanent, trust-building asset that enhances the value of the organization and its service to the community.6
As the pre-regulatory window of 2025–2026 closes, the “Governance Gap” will inevitably be filled—either by proactive institutional design or by reactive enforcement.1 The MV-AIG model, by focusing on the core “Govern” function of the NIST AI RMF 1.0, offers a path toward the former. By establishing risk ownership, defining data boundaries, mandating human review, and ensuring auditability, SMEs can move from being “victims of technological disruption” to “architects of trustworthy innovation”.26
Ultimately, the documentation of an AI policy is not a technical formality; it is an act of institutional leadership. It signals to all stakeholders—regulators, educators, employees, and the public—that the organization recognizes its power and accepts its responsibility.20 In an age where algorithmic systems increasingly mediate human opportunity, this commitment to governance is the most valuable capital an institution can possess.7 The arrival of June 30, 2026, and the subsequent federal standards will not be a moment of crisis for organizations that have adopted MV-AIG, but a moment of validation for their foresight and their commitment to a human-centered, compliance-first future.3
Works cited
- 2026 Year in Preview: AI Regulatory Developments for Companies to Watch Out For, accessed February 9, 2026, https://www.wsgrdataadvisor.com/2026/01/2026-year-in-preview-ai-regulatory-developments-for-companies-to-watch-out-for/
- AI Risk & Compliance 2026: Enterprise Governance Overview – Secure Privacy, accessed February 9, 2026, https://secureprivacy.ai/blog/ai-risk-compliance-2026
- Complying With Colorado’s AI Law: Your SB24-205 Compliance Guide | TrustArc, accessed February 9, 2026, https://trustarc.com/resource/colorado-ai-law-sb24-205-compliance-guide/
- 2026 AI Policy and Semiconductor Outlook: How Federal Preemption, State AI Laws, and Chip Export Controls Will Shape U.S. Policy | Mintz – ML Strategies – JD Supra, accessed February 9, 2026, https://www.jdsupra.com/legalnews/2026-ai-policy-and-semiconductor-1640306/
- State AI laws under federal scrutiny: Key takeaways from the executive order establishing federal AI policy framework | White & Case LLP, accessed February 9, 2026, https://www.whitecase.com/insight-alert/state-ai-laws-under-federal-scrutiny-key-takeaways-executive-order-establishing
- The Business Case for Proactive AI Governance – Wharton, accessed February 9, 2026, https://executiveeducation.wharton.upenn.edu/thought-leadership/wharton-at-work/2025/03/business-case-for-ai-governance/
- Governing AI with trust: an adaptive framework for institutional legitimacy in the UK public sector – Emerald Publishing, accessed February 9, 2026, https://www.emerald.com/tg/article/doi/10.1108/TG-05-2025-0125/1329868/Governing-AI-with-trust-an-adaptive-framework-for
- AI in 2026: How to Build Trustworthy, Governed & Safe AI Systems | Keyrus, accessed February 9, 2026, https://keyrus.com/us/en/insights/ai-in-2026-how-to-build-trustworthy-safe-and-governed-ai-systems-noram
- Senate Bill 24-205 – Colorado General Assembly, accessed February 9, 2026, https://leg.colorado.gov/bill_files/47770/download
- Colorado AI Act (SB 205): Complete Compliance Guide 2026, accessed February 9, 2026, https://almcorp.com/blog/colorado-ai-act-sb-205-compliance-guide/
- 37-48 37 – ARTIFICIAL INTELLIGENCE IN PUBLIC …, accessed February 9, 2026, https://ejournal.goacademica.com/index.php/jv/article/download/1420/872/
- Turning Point As More SMEs Unlock AI – British Chambers of Commerce, accessed February 9, 2026, https://www.britishchambers.org.uk/news/2025/09/turning-point-as-more-smes-unlock-ai/
- SME AI Adoption in 2025: Key Insights from OECD Research That Could Transform Your Business – Daijobu AI, accessed February 9, 2026, https://daijobu.ai/2025/05/14/sme-ai-adoption-in-2025-key-insights-from-oecd-research-that-could-transform-your-business/
- Small Business AI Adoption Statistics for 2025: A Comprehensive Analysis, accessed February 9, 2026, https://usmsystems.com/small-business-ai-adoption-statistics/
- The State of AI in the Enterprise – 2026 AI report | Deloitte US, accessed February 9, 2026, https://www.deloitte.com/us/en/what-we-do/capabilities/applied-artificial-intelligence/content/state-of-ai-in-the-enterprise.html
- Governance of Generative AI | Policy and Society – Oxford Academic, accessed February 9, 2026, https://academic.oup.com/policyandsociety/article/44/1/1/7997395
- 2026 AI Legal Forecast: From Innovation to Compliance | Baker Donelson, accessed February 9, 2026, https://www.bakerdonelson.com/2026-ai-legal-forecast-from-innovation-to-compliance
- View of Governing AI Proactively: Cooperative Models of Anticipation and Accountability, accessed February 9, 2026, https://ojs.aaai.org/index.php/AIES/article/view/36783/38921
- AI RMF Core – AIRC, accessed February 9, 2026, https://airc.nist.gov/airmf-resources/airmf/5-sec-core/
- Everything You Need to Know About NIST AI RMF – Carbide Security, accessed February 9, 2026, https://carbidesecure.com/resources/everything-you-need-to-know-about-nist-ai-rmf/
- NIST AI Risk Management Framework: A simple guide to smarter AI governance – Diligent, accessed February 9, 2026, https://www.diligent.com/resources/blog/nist-ai-risk-management-framework
- NIST.AI.100-1 AI Risk Management Framework – Cyber Security, Compliance & Privacy Solutions | JANUS Associates | USA, accessed February 9, 2026, https://janusassociates.com/resources/nist-ai-risk-management-framework/
- Navigating the NIST AI Risk Management Framework – Hyperproof, accessed February 9, 2026, https://hyperproof.io/navigating-the-nist-ai-risk-management-framework/
- NIST AI RMF 2025 Updates: What You Need to Know About the Latest Framework Changes, accessed February 9, 2026, https://www.ispartnersllc.com/blog/nist-ai-rmf-2025-updates-what-you-need-to-know-about-the-latest-framework-changes/
- Executive Order 14110—Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence | The American Presidency Project, accessed February 9, 2026, https://www.presidency.ucsb.edu/documents/executive-order-14110-safe-secure-and-trustworthy-development-and-use-artificial
- Safeguard the Future of AI: The Core Functions of the NIST AI RMF – AuditBoard, accessed February 9, 2026, https://auditboard.com/blog/nist-ai-rmf
- AI Policy Template – NTEN, accessed February 9, 2026, https://word.nten.org/wp-content/uploads/2024/08/AI-Policy-Template-by-ANB-Advisory.pdf
- The AI Acceptable Use Policy Template for Small Teams – Zevonix, accessed February 9, 2026, https://zevonix.com/the-ai-acceptable-use-policy-template-for-small-teams/
- (PDF) Operationalizing The NIST AI RMF For Smes -Top National Priority (AI Safety) And Perfect For Your Data/IT Toolkit; Produce A Lean Control Catalog, Audit Checklist, And Incident Drill For Real LLM Workflows – ResearchGate, accessed February 9, 2026, https://www.researchgate.net/publication/396371944_Operationalizing_The_NIST_AI_RMF_For_Smes_-Top_National_Priority_AI_Safety_And_Perfect_For_Your_DataIT_Toolkit_Produce_A_Lean_Control_Catalog_Audit_Checklist_And_Incident_Drill_For_Real_LLM_Workflows
- AI Policy Template: Why every business needs one and how to get yours today – Thoropass, accessed February 9, 2026, https://www.thoropass.com/blog/ai-governance-policy-template
- AI Policy Pack US | Professional AI Governance Templates for Small Business, accessed February 9, 2026, https://aipolicypack.com/
- 2026 Outlook: Artificial Intelligence | Insights – Greenberg Traurig, LLP, accessed February 9, 2026, https://www.gtlaw.com/en/insights/2025/12/2026-outlook-artificial-intelligence
- Di Tran University, accessed February 9, 2026, https://ditranuniversity.com/
- About the Founder: Di Tran – A Journey from Humble Beginnings to Visionary Leadership, accessed February 9, 2026, https://ditranuniversity.com/founderditran/
- Di Tran research series Archives, accessed February 9, 2026, https://ditranuniversity.com/tag/di-tran-research-series/
- Di Tran — Founder & CEO | Visionary Leader in Workforce Education, Humanized AI, and Immigrant Entrepreneurship – New American Business Association (NABA) – Louisville, KY, accessed February 9, 2026, https://naba4u.org/di-tran-founder-ceo-visionary-leader-in-workforce-education-humanized-ai-and-immigrant-entrepreneurship/
- Louisville Beauty Academy — Aesthetic/Esthetic 750 Clock Hours Curriculum, accessed February 9, 2026, https://louisvillebeautyacademy.net/louisville-beauty-academy-mastering-aesthetics-with-a-comprehensive-curriculum/
- About Us – Louisville Beauty Academy, accessed February 9, 2026, https://louisvillebeautyacademy.net/about/
- Louisville Beauty Academy: Our Direction Forward (2026 and Beyond), accessed February 9, 2026, https://louisvillebeautyacademy.net/louisville-beauty-academy-our-direction-forward-2026-and-beyond/
- From Your First License to Your Own Salon — LBA’s Step-by-Step Path with Apprenticeship and Ownership Opportunities – Louisville Beauty Academy, accessed February 9, 2026, https://louisvillebeautyacademy.net/from-your-first-license-to-your-own-salon-lbas-step-by-step-path-with-apprenticeship-and-ownership-opportunities/
- 2026 AI Laws Update: Key Regulations and Practical Guidance – Gunderson Dettmer, accessed February 9, 2026, https://www.gunder.com/en/news-insights/insights/2026-ai-laws-update-key-regulations-and-practical-guidance
- What EO 14365 Means for State AI Laws and Business Compliance – Fraser Stryker, accessed February 9, 2026, https://www.fraserstryker.com/what-eo-14365-means-for-state-ai-laws-and-business-compliance/
- Latest AI Regulations Update: What Enterprises Need to Know in 2026 – Credo AI, accessed February 9, 2026, https://www.credo.ai/blog/latest-ai-regulations-update-what-enterprises-need-to-know
- The Annual AI Governance Report 2025: Steering the Future of AI – ITU, accessed February 9, 2026, https://www.itu.int/epublications/publication/the-annual-ai-governance-report-2025-steering-the-future-of-ai
- 1 year later, how has the White House AI Executive Order delivered on its promises?, accessed February 9, 2026, https://www.brookings.edu/articles/one-year-later-how-has-the-white-house-ai-executive-order-delivered-on-its-promises/