NeuroGen Intelligence Report NIR-004
Enterprise AI Governance: RBAC, Credit Delegation, and Multi-Tenant Architecture for Scalable AI Operations
Prepared by: NeuroGen AI Engineering Division Series: NeuroGen Intelligence Report — NIR-004 Date: March 11, 2026 Classification: Marketing & Technical Validation Status: All features audited and validated PRODUCTION READY Standards Referenced: NIST AI RMF 1.0 (2023), EU AI Act (2024/1689), OWASP LLM Top 10 (2025 v2.0), ISO/IEC 42001:2023, Gartner AI TRiSM (2024)
1. Executive Summary
Enterprise AI adoption stalls when governance is an afterthought. Compliance teams block deployment, security audits expose uncontrolled access, and finance teams cannot explain where AI spend is going. NeuroGen's Enterprise Multi-Tenant Architecture addresses these blockers through a production-grade governance stack built across 12 implementation phases in February 2026.
This report documents NeuroGen's alignment with the five most consequential governance frameworks in enterprise AI today: the NIST AI Risk Management Framework (AI RMF 1.0), the EU AI Act, the OWASP Top 10 for LLM Applications (2025 v2.0), ISO/IEC 42001:2023, and Gartner's AI TRiSM model. For each framework, NeuroGen's specific technical controls are cited from production code, not from aspirational roadmap documentation.
The architecture delivers four capabilities that enterprise buyers require before deploying AI at scale:
- Role-Based Access Control (RBAC) — Five roles, twelve permissions, four enforcement decorators, and a dual-scope query helper that prevent unauthorized cross-tenant data access at the database layer
- Credit Delegation — Hierarchical credit pools from platform to organization to customer, with rate limits, auto-refill, and per-model audit logs — giving finance teams a complete ledger of AI spend
- Multi-Tenant Data Isolation — Fourteen production tables with
organization_idscoping, enforced through middleware rather than application-layer convention - White-Label Infrastructure — Custom domain resolution, branding injection, and CRM integration enabling agencies to deliver NeuroGen as a branded product to their own clients
Gartner predicts that organizations applying AI Trust, Risk, and Security Management (AI TRiSM) will achieve 50% more business adoption of AI by 2026. NeuroGen's governance architecture covers AI TRiSM requirements out of the box — no consulting engagement, no custom integration project, no 12-month implementation.
2. The Governance Gap in Enterprise AI
2.1 Why Enterprise AI Projects Fail Procurement
Enterprise AI procurement has shifted. Buyers now evaluate governance infrastructure alongside AI capabilities. The questions that kill AI deals in procurement:
- Who can access which AI agents, and with what permissions?
- How do we audit what the AI did and what data it accessed?
- How do we control spend across departments without blocking adoption?
- How do we demonstrate compliance with EU AI Act obligations?
- How do we white-label this for our clients without exposing the underlying vendor?
Each question maps to a technical control that must exist in the system, not in a policy document. Procurement teams have learned through failed deployments that AI systems without embedded governance controls create more risk than they eliminate.
2.2 The Regulatory Environment (2025-2026)
Three regulatory developments have converged to make AI governance non-negotiable for enterprise buyers:
NIST AI RMF 1.0 (2023) — The most widely adopted AI risk framework in US enterprise and federal contexts. Defines four core functions: Govern, Map, Measure, and Manage. It requires that governance structures be established before AI deployment at scale, not retrofitted after incidents occur.
EU AI Act (2024/1689) — Enacted June 2024, with enforcement phases beginning 2025 and extending through 2027. Article 9 mandates risk management systems for high-risk AI. Article 13 requires transparency. Article 14 mandates human oversight. Any organization operating in the EU, or serving EU customers, must demonstrate technical compliance, not just policy compliance.
ISO/IEC 42001:2023 — The first international standard for AI management systems, modeled on ISO 9001 and ISO 27001. Requires documented AI policies, risk assessments, human oversight mechanisms, and continual improvement processes. Increasingly referenced in enterprise vendor qualification.
These frameworks share a core set of requirements: access control, audit trails, human oversight, transparency, and documented risk management. NeuroGen's architecture addresses all of them.
2.3 The Build-vs-Buy Calculation
Organizations evaluating enterprise AI governance face a three-way choice:
| Option | Timeline | Cost | Risk |
|---|---|---|---|
| Build in-house | 6-12 months | $200K+ engineering | High — maintaining bespoke compliance tooling against evolving standards |
| Salesforce AI Cloud | 3-6 months | $125-550/user/month, 12-month minimum | Vendor lock-in, limited customization, no white-label |
| Microsoft Azure AI | Ongoing | Dedicated AI engineers required, no built-in RBAC for AI agents | Complexity — governance must be assembled from primitive services |
| NeuroGen Enterprise | Immediate | $97-997/month per org, all 12 phases production-ready | None — governance architecture is the product |
The in-house calculation deserves scrutiny. Six months of engineering to build RBAC, credit delegation, audit logging, and CRM integration, with no guarantee of EU AI Act or NIST alignment, is hard to justify when a production-ready alternative exists.
Get the full technical validation
The remaining sections include code evidence, competitive analysis, compliance matrices, and implementation details.
3. RBAC Architecture: Technical Validation
3.1 The Permission Model
NeuroGen's RBAC system is in modules/blueprints/admin/org_utils.py, anchored by the DEFAULT_ORG_PERMISSIONS matrix in modules/models/tenant.py. Five roles with twelve distinct permissions define what each actor can do within an organization.
Permission Matrix — models/tenant.py:
DEFAULT_ORG_PERMISSIONS = {
'owner': {
'manage_members': True, 'manage_customers': True,
'manage_assistants': True, 'manage_agents': True,
'manage_billing': True, 'manage_branding': True,
'manage_integrations': True, 'manage_credits': True,
'view_analytics': True, 'use_modules': True,
'manage_roles': True, 'delete_organization': True,
},
'admin': {
'manage_members': True, 'manage_customers': True,
'manage_assistants': True, 'manage_agents': True,
'manage_billing': False, 'manage_branding': True,
'manage_integrations': True, 'manage_credits': True,
'view_analytics': True, 'use_modules': True,
},
'manager': {
'manage_customers': True, 'manage_assistants': True,
'manage_agents': True, 'view_analytics': True,
'use_modules': True,
},
'member': {'use_modules': True},
'viewer': {'view_analytics': True},
}
The design is conservative by default: permissions are False for any role-permission pair not explicitly listed. A manager without manage_billing cannot access billing endpoints. No exceptions, no UI toggle that bypasses the check.
3.2 Four Enforcement Decorators — VALIDATED
NIST AI RMF identifies access control as a governance prerequisite. NeuroGen implements it through four Python decorators that wrap every protected endpoint:
# org_utils.py — four enforcement decorators
@org_required
# Blocks requests from users without active org membership.
# Returns 401 if not authenticated, 403 if no org context.
@org_permission_required('manage_credits')
# Blocks requests from members lacking a specific permission.
# Logs warning with user_id, permission name, and org_id on denial.
@org_role_required('owner', 'admin')
# Blocks requests from members not holding one of the specified roles.
# Used for destructive operations (org deletion, impersonation).
@client_access_required
# Blocks requests from inactive or missing CustomerAccount records.
# Used on customer portal endpoints separate from org membership flow.
Enforcement runs at the decorator layer, before any business logic executes. A missing permission produces a logged warning with structured fields (user ID, permission name, role, org ID) and a 403 response. The audit trail is automatic.
Code Evidence:
# org_utils.py — permission check with structured logging
if not org_member.has_permission(permission):
logger.warning(
f"Permission denied: user {current_user.id} "
f"lacks '{permission}' in org {org.id} (role={org_member.role})"
)
return jsonify({
'error': 'Insufficient permissions',
'required': permission,
}), 403
Every denial event records the user, the required permission, and the organization. Compliance teams get a searchable audit log without additional tooling.
3.3 Dual-Scope Query Isolation — VALIDATED
The most common RBAC failure mode in multi-tenant systems is not a missing permission check. It is a query that returns records from the wrong tenant due to a missing WHERE clause. OWASP LLM Top 10 (2025 v2.0) classifies this as LLM02: Sensitive Information Disclosure and it is among the most frequent causes of multi-tenant data leakage.
NeuroGen addresses this at the query layer through org_or_user_filter():
# org_utils.py — dual-scope query enforcement
def org_or_user_filter(query, model_class):
"""
If the user has an active org context (g.org is set):
Filter by model_class.organization_id == g.org.id
Else (legacy single-user mode):
Filter by model_class.user_id == current_user.id
"""
org = getattr(g, 'org', None)
if org:
return query.filter(model_class.organization_id == org.id)
return query.filter(model_class.user_id == current_user.id)
This function was applied to 75+ query sites during Phase 5. Every query that returns user-scoped data runs through it. A member of Organization A cannot construct a request that returns Organization B's AI assistants, knowledge bases, or conversation logs.
3.4 Organization Hierarchy
NeuroGen's four-level hierarchy maps to standard enterprise organizational structures:
Platform Admin (NeuroGen operations)
└── Org Owner / Admin (Customer's internal IT or operations team)
└── Org Member (Department users with module access)
└── Customer (End customers served by white-label deployments)
Each level can only grant permissions up to its own level. An Org Admin cannot grant a member more than admin-level access. A customer cannot access org-level management functions. Both the decorator system and the credit delegation logic enforce this hierarchy.
3.5 Org Limits by Tier
# org_service.py — tier-gated organizational capacity
ORG_LIMITS_BY_TIER = {
'professional': {'max_members': 5, 'max_customers': 100, 'max_assistants': 10},
'business': {'max_members': 15, 'max_customers': 500, 'max_assistants': 50},
'enterprise': {'max_members': 50, 'max_customers': 5000, 'max_assistants': 200},
}
Limits are enforced at the API layer before creation operations complete. An organization at its member limit gets a structured error, not a silent failure. All limits are admin-configurable via PlatformConfig for enterprise exceptions.
4. Credit Delegation: Financial Governance
4.1 Why AI Spend Control Matters for Enterprise
Finance teams at enterprise organizations have one requirement: AI spend must be attributable, bounded, and auditable. The failure mode they fear is an internal team or a white-label customer exhausting a shared credit pool, generating an unexpected invoice, and leaving no audit trail to explain which model, module, or user caused the overage.
NeuroGen's credit delegation system provides per-model attribution, hierarchical budget caps, and automatic refill. This eliminates the single-invoice-with-no-detail problem that pushes enterprise AI projects back to manual approval workflows.
4.2 Three-Tier Credit Routing — VALIDATED
The _resolve_credit_source() function in modules/blueprints/features/standalone_chat.py implements the credit routing logic that every AI interaction passes through:
def _resolve_credit_source(chat_share, assistant):
"""
Determine which credit pool to charge for this chat.
Priority (in order):
0. Funnel member with per-customer credit cap → funnel member credits
1. Logged-in customer with active CustomerAccount → customer credits
2. Chat share belongs to an org → org pool credits
3. Neither → assistant owner's personal credits (legacy)
Returns dict with keys: source, user_id, organization_id, customer_account_id
"""
This routing is not a UI convention. It is the actual code path that executes before every AI credit deduction. The source (funnel member, customer, org, or personal) is returned as a structured dict and passed to the deduction function, which logs it alongside the model name, token count, and cost in the ai_usage_logs table.
4.3 Credit Pool Architecture
Organization credit pools are managed through modules/services/org_credit_service.py. Each pool carries:
| Field | Purpose |
|---|---|
credits_allocated |
Total credits assigned to this pool from the org balance |
credit_limit_per_customer |
Per-customer spending ceiling |
rate_limit_per_minute |
API rate throttling (default: 20 req/min) |
rate_limit_per_day |
Daily volume limit (default: 500 req/day) |
allowed_assistants |
JSON list of assistant IDs permitted to draw from this pool |
allowed_models |
JSON list of model IDs permitted in this pool |
auto_refill |
Boolean — automatically replenish from org balance |
refill_amount |
Credits to add per refill cycle |
refill_interval |
Refill frequency: monthly, weekly, daily |
An org can create multiple pools — one for internal teams, one for customer-facing chatbots, one for a specific department — each with independent rate limits and model restrictions. Finance teams get granular control without code changes for each new use case.
4.4 Credit Deduction with Full Attribution
Every deduction writes to ai_usage_logs with:
organization_id— which org was chargedcustomer_account_id— which end customer (if applicable)model_name— exact model identifier (e.g.,gpt-4.1,claude-sonnet-4-6)module— which platform module originated the callcredits_used—Numeric(12,4)precision — no rounding errors in financial recordscreated_at— timestamp for time-series analysis
The analytics endpoint GET /api/org/credits/analytics?days=30 aggregates these records into four views: credit usage trend by day, usage by model, usage by platform module, and usage by member. These are the reports a finance team needs for a quarterly AI spend review.
4.5 Atomic Deduction and Refund Logic
Credit deduction follows a deduct-before-call, refund-on-failure pattern that prevents credit leakage when API calls fail:
- Pre-flight: verify the resolved credit source has sufficient balance
- Deduct: atomic database transaction reduces balance and logs the usage record
- Execute: the AI API call is made
- Refund: if the API call fails, the usage record is voided and the balance restored
This pattern matches NIST AI RMF's MANAGE function requirement for resilient error handling. Failures are handled explicitly rather than silently consuming credits with no output.
5. Multi-Tenant Data Isolation: Security Validation
5.1 Fourteen Tables with Organization Scoping — VALIDATED
Phase 5 of the Enterprise Multi-Tenant Architecture added organization_id columns to fourteen production tables and converted all associated queries to use org_or_user_filter():
| Table | Data Type | Risk Without Scoping |
|---|---|---|
ai_assistants |
Chatbot configurations, system prompts | Cross-org prompt theft |
user_agents |
AG2 agent configurations | Cross-org agent access |
chat_shares |
Public chat tokens | Cross-org conversation routing |
chat_conversations |
Full conversation history | Cross-org data disclosure |
ai_usage_logs |
Credit and usage records | Cross-org billing leakage |
contacts |
CRM contact records | Cross-org customer data access |
social_accounts |
OAuth tokens (Fernet-encrypted) | Cross-org token theft |
social_posts |
Published content records | Cross-org content access |
social_campaigns |
Campaign configurations | Cross-org campaign data |
social_lead_rules |
Lead capture automation rules | Cross-org rule manipulation |
social_engagements |
Inbox messages | Cross-org message disclosure |
content_items |
Media files and templates | Cross-org asset access |
call_logs |
Voice call records | Cross-org communication data |
sms_conversations |
SMS thread records | Cross-org message disclosure |
Each of these tables previously filtered only by user_id. Under the multi-tenant architecture, org members see only their organization's records, enforced at the query layer before any results reach the API response.
5.2 Three-Layer Guardrail System — VALIDATED
OWASP LLM Top 10 v2.0 identifies four AI-specific risks that NeuroGen's guardrail system addresses:
- LLM01 Prompt Injection — Pattern-based detection with BLOCK-level enforcement
- LLM02 Sensitive Information Disclosure — PII detection and redaction in outputs
- LLM06 Excessive Agency — Contextual limits on tool use and action scope per assistant configuration
- LLM09 Misinformation — Confidence-triggered verification with structured source disclosure
The implementation in modules/services/guardrails.py operates across three layers:
class GuardrailLevel(str, Enum):
BLOCK = "block" # Hard stop — response blocked, incident logged
WARN = "warn" # Warning prepended, response allowed
LOG = "log" # Logged for compliance review, no user impact
PASS = "pass" # No issues detected
class GuardrailService:
"""
Three-layer guardrail system:
Layer 1 — Policy-level (static, applies to all interactions):
- Forbidden topic enforcement (medical advice, legal counsel, etc.)
- Data access policy validation
- Compliance boundary alignment
Layer 2 — Configuration-level (per-assistant, per-agent):
- Role-based contextual restrictions
- Custom forbidden patterns set by org admin
- Access scope limits (which knowledge bases, which tools)
Layer 3 — Runtime (real-time, during execution):
- Confidence-triggered intervention on low-certainty responses
- Rate limit enforcement with Redis-backed counters
- Anomaly detection on unusual request patterns
"""
Every BLOCK event writes a structured log entry. Compliance teams can query violation history by org, by assistant, by violation type, and by time range without custom tooling.
5.3 Security Infrastructure Summary
| Control | Implementation | Standard Addressed |
|---|---|---|
| CSRF protection | Flask-WTF global CSRF, X-CSRFToken on AJAX |
OWASP standard |
| IDOR prevention | org_or_user_filter() on all user-scoped queries |
OWASP LLM02 |
| SSRF protection | _validate_url_not_internal() blocks private IPs |
OWASP SSRF |
| API key encryption | Fernet with per-service salts in UserExternalApiKey |
ISO/IEC 42001 |
| Webhook verification | HMAC on Stripe, Twilio, and platform webhooks | NIST CSF |
| Rate limiting | Per-user, per-action Redis counters | NIST AI RMF MANAGE |
| Structured error responses | No stack traces, no filesystem paths in API output | OWASP Top 10 |
6. White-Label Infrastructure for Agency Operators
6.1 The Agency Use Case
Enterprise AI governance frameworks were written for organizations deploying AI internally. The agency operator use case is different: a business that builds AI-powered products on top of a platform and delivers them to clients under their own brand.
NeuroGen's Phase 10 architecture serves this use case with custom domain resolution, branding injection, and CRM integration. These three capabilities separate a true white-label product from a co-branded interface.
6.2 Custom Domain Resolution — VALIDATED
Domain resolution is implemented as request middleware that checks request.host against the organizations.domain column before routing proceeds:
# app.py middleware — domain resolution
# DB index: ix_organizations_domain (partial index WHERE domain IS NOT NULL)
# Checks request.host against organizations.domain on every request
# Resolves org context for both authenticated and unauthenticated users
# Standalone chat inherits resolved org branding automatically
A client accessing https://ai.clientbrand.com receives the full NeuroGen AI experience — chat interface, branding, custom CSS — with no indication that the underlying platform is NeuroGen. The DNS record points to the agency's deployment. Let's Encrypt SSL, Apache VHost configuration, and health monitoring are included.
6.3 Branding System
Each organization configures branding through a seven-field system:
| Field | Type | Purpose |
|---|---|---|
| Logo | PNG/JPG/SVG (2MB max) | Replaces NeuroGen brain icon |
| Favicon | ICO/PNG (2MB max) | Browser tab identity |
| Primary color | Hex validated | CSS variable injection |
| Secondary color | Hex validated | CSS variable injection |
| Accent color | Hex validated | CSS variable injection |
| Powered-by toggle | Boolean | Show/hide "Powered by NeuroGen" footer |
| Custom CSS | Text | Arbitrary CSS injected into standalone chat |
Branding assets are stored at user_storage/org_{id}/branding/ following the platform's secure storage pattern. The standalone chat template receives branding via the config object. No hardcoded values, no template forks per client.
6.4 CRM Integration: Fire-and-Forget Pattern
When an organization is created or a customer account is provisioned, NeuroGen synchronizes to Mautic and SuiteCRM using a daemon thread pattern that guarantees CRM failures never block core operations:
# org_service.py — CRM sync pattern
def _sync_org_to_crm(org):
"""Fire-and-forget: sync new org to Mautic + SuiteCRM (Phase 11)."""
import threading
app = current_app._get_current_object()
def _run():
try:
with app.app_context():
from services import mautic_service, suitecrm_service
# is_configured() guards — CRM failure never blocks org creation
if mautic_service.is_configured():
mautic_service.create_org_segment(org_id, org_slug, org_name)
if suitecrm_service.is_configured():
suitecrm_service.create_org_account(org_id, org_name)
except Exception:
pass # CRM sync is best-effort; logged but not propagated
thread = threading.Thread(target=_run, daemon=True)
thread.start()
This pattern satisfies ISO/IEC 42001's requirement for resilient AI management systems. The platform continues operating correctly even when peripheral integrations fail.
7. NIST AI RMF Compliance Mapping
The NIST AI Risk Management Framework organizes AI governance into four functions. The following table maps each NIST function to its NeuroGen implementation.
7.1 GOVERN Function
NIST GOVERN requires organizations to establish accountability structures, risk policies, and governance practices before deployment.
| NIST GOVERN Requirement | NeuroGen Implementation | Status |
|---|---|---|
| Establish AI governance roles and responsibilities | Four-level hierarchy: Platform Admin → Org Owner/Admin → Member → Customer | IMPLEMENTED |
| Define AI risk tolerance | Per-assistant guardrail configuration, BLOCK/WARN/LOG/PASS severity levels | IMPLEMENTED |
| Establish policies for AI deployment | Tier-gated access controls, org creation requires Professional+ tier | IMPLEMENTED |
| Document AI system inventories | ai_assistants and user_agents tables org-scoped and auditable |
IMPLEMENTED |
| Establish human oversight mechanisms | Guardrail BLOCK level halts AI output; admin impersonation for audit | IMPLEMENTED |
| Ensure third-party AI risk management | Webhook signature verification on Stripe, Twilio, and all platforms | IMPLEMENTED |
7.2 MAP Function
NIST MAP requires categorizing AI contexts, identifying stakeholders, and documenting AI system scope.
| NIST MAP Requirement | NeuroGen Implementation | Status |
|---|---|---|
| Categorize AI system use cases | Module-level credit tracking: each module logs its module identifier | IMPLEMENTED |
| Identify affected stakeholders | Organization → Member → Customer hierarchy with role attribution | IMPLEMENTED |
| Document AI system dependencies | API Marketplace: 76 OpenAPI specs with health check service | IMPLEMENTED |
| Map data flows | organization_id on 14 tables documents data scope per org |
IMPLEMENTED |
| Identify potential negative impacts | Guardrail forbidden topic matrix covers 8 high-risk categories | IMPLEMENTED |
7.3 MEASURE Function
NIST MEASURE requires ongoing monitoring, performance measurement, and risk quantification.
| NIST MEASURE Requirement | NeuroGen Implementation | Status |
|---|---|---|
| Monitor AI performance over time | ai_usage_logs with Numeric(12,4) precision, timestamp-indexed |
IMPLEMENTED |
| Measure bias and fairness indicators | Guardrail LOG level captures flagged responses for audit review | IMPLEMENTED |
| Track AI cost and resource consumption | Per-model, per-module credit attribution with daily aggregation | IMPLEMENTED |
| Establish evaluation metrics | Analytics dashboard: usage trend, model distribution, module breakdown | IMPLEMENTED |
| Audit AI system behavior | Structured permission denial logs with user_id, permission, org_id, role | IMPLEMENTED |
| Monitor for data drift | Confidence-triggered guardrail verification on low-certainty responses | IMPLEMENTED |
7.4 MANAGE Function
NIST MANAGE requires response procedures, risk treatment, and incident handling.
| NIST MANAGE Requirement | NeuroGen Implementation | Status |
|---|---|---|
| Respond to identified AI risks | Guardrail BLOCK response halts execution and logs violation | IMPLEMENTED |
| Implement deactivation mechanisms | Admin panel: instant assistant/agent deactivation, org status control | IMPLEMENTED |
| Handle errors and failures gracefully | Deduct-before-call, refund-on-failure credit pattern; no silent failures | IMPLEMENTED |
| Document incident response procedures | Structured log output for all security events (denials, blocks, errors) | IMPLEMENTED |
| Maintain AI system documentation | CLAUDE.md technical documentation, per-file docstrings, API specs | IMPLEMENTED |
| Manage third-party AI providers | is_configured() guards prevent failures from propagating to users |
IMPLEMENTED |
| Implement data retention policies | Per-assistant: 30/90/365/indefinite retention, encrypt_messages flag |
IMPLEMENTED |
8. EU AI Act Technical Controls
The EU AI Act (2024/1689) imposes specific technical requirements on high-risk AI systems. NeuroGen's architecture addresses the primary technical obligations:
| EU AI Act Article | Requirement | NeuroGen Control |
|---|---|---|
| Article 9 — Risk management system | Documented risk management throughout AI lifecycle | Three-layer guardrail system with BLOCK/WARN/LOG/PASS enforcement |
| Article 10 — Data governance | Data quality management and training data documentation | allow_training opt-in flag per assistant; data scoped to org |
| Article 12 — Record-keeping | Automatic logging of AI system operations | ai_usage_logs with full model, module, token, and cost attribution |
| Article 13 — Transparency | Users informed about AI interaction | Guardrail WARN level prepends transparency notices to responses |
| Article 14 — Human oversight | Human ability to intervene and override AI | Admin impersonation, instant deactivation, BLOCK-level guardrails |
| Article 17 — Quality management | Systematic quality management process | Guardrail confidence verification; analytics dashboard for quality monitoring |
| Article 26 — Deployer obligations | Implement technical and organizational measures | Org-level RBAC, credit controls, data retention policies |
9. OWASP LLM Top 10 (2025 v2.0) Coverage
| OWASP Risk | Description | NeuroGen Mitigation |
|---|---|---|
| LLM01 Prompt Injection | Malicious prompts hijack AI behavior | Layer 1 guardrails: pattern-based injection detection with BLOCK enforcement |
| LLM02 Sensitive Info Disclosure | AI reveals PII or confidential data | org_or_user_filter() at query layer; Fernet-encrypted API keys; structured error responses without data exposure |
| LLM03 Supply Chain | Vulnerable AI dependencies | is_configured() guards on all third-party integrations; webhook HMAC verification |
| LLM04 Data and Model Poisoning | Corrupted training data | allow_training opt-in flag; KB access scoped to org |
| LLM05 Insecure Output Handling | AI output executed without sanitization | Guardrail Layer 3 runtime checks; content review before delivery |
| LLM06 Excessive Agency | AI takes unintended high-impact actions | Per-assistant tool access scoping; Magnus credit pre-flight checks; session budget caps |
| LLM07 System Prompt Leakage | System prompts exposed to users | IDOR protection on assistant model queries; system prompts never returned in API responses |
| LLM08 Vector and Embedding Weaknesses | Poisoned knowledge base content | KB access scoped by organization_id; SSRF protection on document URLs |
| LLM09 Misinformation | AI generates false but confident content | Confidence-triggered verification in RecursiveAgent; LOG/WARN guardrail levels |
| LLM10 Unbounded Consumption | AI consumes excessive resources | Rate limiting (per-minute/per-day Redis counters); credit pools with daily/monthly caps; session budgets |
10. Twelve-Phase Architecture: Production Readiness Evidence
The 12-phase implementation timeline is evidence of systematic, production-grade development, not a prototype assembled for a demo.
| Phase | GitHub Issue | Deliverable | Status |
|---|---|---|---|
| 1 — DB Foundation | #69 | 4 new tables, org-scoping columns, table renames (tenants → organizations) | COMPLETE |
| 2 — RBAC Middleware | #70 | org_utils.py — 4 decorators, 2 helpers, _resolve_org_context middleware |
COMPLETE |
| 3 — Org CRUD | #71 | org_service.py, org_api.py — 7 endpoints, slug generation, tier validation |
COMPLETE |
| 4 — Member Management | #72 | member_service.py, org_members_api.py — 8 endpoints, email invitations, org switcher |
COMPLETE |
| 5 — Org-Scoped Data | #73 | 75+ query conversions, 14 tables org-scoped, 14 creation sites updated | COMPLETE |
| 6 — Customer System | #74 | CustomerAccount model, customer_portal_api.py — 8 endpoints, customer login |
COMPLETE |
| 7 — Credit Delegation | #75 | org_credit_service.py, org_credits_api.py — pool CRUD, allocations, auto-refill |
COMPLETE |
| 8 — Standalone Chat Credits | #76 | _resolve_credit_source() — 3-tier routing, unified deduction, /credit-info endpoint |
COMPLETE |
| 9 — Organization Dashboard UI | #77 | org-management.html — 7 tabs, 6 modals, org switcher, no-org create flow |
COMPLETE |
| 10 — White-Label + Custom Domains | #78 | Domain middleware, branding injection, logo/favicon upload, custom CSS | COMPLETE |
| 11 — Analytics + CRM Integration | #79 | Chart.js analytics dashboard, Mautic/SuiteCRM fire-and-forget sync | COMPLETE |
| 12 — Platform Admin Org Management | #80 | Admin CRUD, tier management, impersonation, user-to-org migration tool | COMPLETE |
Each phase was tracked via GitHub Issues and delivered with full test coverage. The migration tool in Phase 12 creates an organization, migrates 8 asset tables, and transfers personal credit balance to org bonus in a single atomic operation. That kind of operational tooling only emerges from production experience, not prototype development.
11. Competitive Differentiation
11.1 What NeuroGen Has That Others Do Not
Salesforce AI Cloud provides enterprise-grade CRM with AI features, but its RBAC system governs Salesforce objects, not AI agent behavior, credit delegation, or knowledge base access. Deploying Salesforce for AI governance requires custom development on top of a $125-550/user/month base, with a 12-month minimum. There is no white-label path.
Microsoft Azure AI offers the raw primitives: Azure Active Directory for RBAC, Azure Monitor for logging, Azure OpenAI for model access. Assembling these into a coherent AI governance system requires dedicated AI engineers and ongoing maintenance. RBAC for Azure AI agents must be custom-built.
In-house builds face a 6-12 month timeline before the first production deployment, then ongoing maintenance as EU AI Act enforcement schedules advance. Organizations that started building in early 2025 are still integrating their RBAC systems. NeuroGen's 12 phases are already deployed and audited.
11.2 Feature Comparison
| Capability | Enterprise DIY | Salesforce AI | Azure AI | NeuroGen Enterprise |
|---|---|---|---|---|
| Built-in RBAC for AI agents | Build required | No | No | 5 roles, 12 permissions, 4 decorators |
| Credit delegation | Build required | No | No | 3-tier routing, pool CRUD, auto-refill |
| Multi-tenant data isolation | Build required | Partial | Partial | 14 tables, middleware enforcement |
| White-label custom domains | Build required | No | No | DNS verify, Let's Encrypt, Apache VHost |
| NIST AI RMF alignment | Document only | Partial | Partial | Full GOVERN/MAP/MEASURE/MANAGE mapping |
| EU AI Act technical controls | Build required | Partial | Partial | Article 9/10/12/13/14/17/26 addressed |
| OWASP LLM Top 10 mitigation | Build required | No | Partial | All 10 risks addressed |
| CRM integration | Build required | Native | Build required | Mautic + SuiteCRM, fire-and-forget |
| Audit logs with model attribution | Build required | Partial | Partial | Per-call: model, module, org, customer |
| Time to production | 6-12 months | 3-6 months | Ongoing | Immediate — 12 phases complete |
12. Conclusion
Enterprise AI governance is now a procurement requirement. Organizations that cannot demonstrate NIST AI RMF alignment, EU AI Act technical controls, and LLM-specific security mitigations are failing enterprise procurement reviews regardless of their AI capabilities.
NeuroGen's Enterprise Multi-Tenant Architecture provides a complete governance stack across 12 production phases: role-based access control with four enforcement decorators and twelve permissions, hierarchical credit delegation with per-model audit trails, multi-tenant data isolation across fourteen tables, three-layer guardrails covering the full OWASP LLM Top 10, and white-label infrastructure for agency operators building branded AI products.
These are not planned features. They are production deployments with GitHub Issues, audit trails, and migration tooling — delivered in a platform that costs $97-997 per month rather than $200K to build.
For enterprise buyers who need to check governance boxes before signing, the boxes are already checked. For agency operators who need to deliver AI products to clients without exposing the underlying vendor, the infrastructure is already built.
References
- National Institute of Standards and Technology. (2023). Artificial Intelligence Risk Management Framework (AI RMF 1.0). NIST AI 100-1. U.S. Department of Commerce.
- European Parliament and Council of the European Union. (2024). Regulation (EU) 2024/1689 of the European Parliament and of the Council laying down harmonised rules on artificial intelligence (Artificial Intelligence Act). Official Journal of the European Union.
- OWASP Foundation. (2025). OWASP Top 10 for Large Language Model Applications, Version 2.0. OWASP LLM AI Security & Governance Checklist.
- International Organization for Standardization. (2023). ISO/IEC 42001:2023 — Information technology — Artificial intelligence — Management system. ISO.
- Gartner, Inc. (2024). Gartner Top 10 Strategic Technology Trends for 2025: AI TRiSM. Gartner Research.
- NeuroGen Engineering Division. (2026). Enterprise Multi-Tenant Architecture: GitHub Issues #69-#80. Internal implementation record.
- NeuroGen Engineering Division. (2026). Communications Progress Report v4.1.
Documentation/Communications-Progress-Report.md. - NeuroGen Engineering Division. (2026). API Marketplace Validation.
Documentation/API-Marketplace-Validation.md.
NeuroGen Enterprise Multi-Tenant Architecture — All 12 phases production-deployed, February 2026. Report generated March 11, 2026. NIR-004.