Aligned with Singapore’s Model AI Governance Framework (MAIG)
Last Updated: October 30, 2025
Framework Version: Model AI Governance Framework (Second Edition, 2020) & Model AI Governance Framework for Generative AI (2024)
Our Commitment
At ChatBar AI, we believe that responsible AI development is not just a compliance requirement – it’s a competitive advantage and a moral imperative. We have designed our platform from the ground up to align with Singapore’s Model AI Governance Framework, ensuring that our AI systems are explainable, transparent, fair, and human-centric.
“We’re strong believers in AI’s potential, but we also know its limitations. That is why every line of code in our system is designed and reviewed by experienced human programmers who take full responsibility for the code and design decisions. This human approach ensures our AI applications are transparent, compliant, and efficient by design. When you work with us, you’re not just getting AI technology; you’re getting technology built with accountability at its core. We see it as trust by design, because real trust can’t be bolted on later; it has to be built in from day one.” – Brod Justice, Co-Founder, CTO & Lead Full-Stack Developer
Alignment with Singapore’s Model AI Governance Framework
ChatBar AI’s development practices align with the Personal Data Protection Commission (PDPC) Singapore’s Model AI Governance Framework, which establishes two core principles:
Core Principle 1: Decisions made by AI should be EXPLAINABLE, TRANSPARENT & FAIR
We comply through:
- Clear documentation of how our AI systems process conversations
- Transparent data practices with no hidden personal data collection
- Fair outcomes through anonymous data architecture that prevents discriminatory profiling
- Regular audits of AI model outputs for bias and fairness
Core Principle 2: AI systems should be HUMAN-CENTRIC
We comply through:
- Human oversight at every stage of development and deployment
- Experienced human programmers design, review, and take responsibility for all code
- Customer control over AI behavior through configuration and customization
- Human-in-the-loop for critical decisions and escalations
The 11 AI Ethics Principles
ChatBar AI’s platform addresses all 11 internationally recognized AI ethics principles from Singapore’s AI Verify framework:
| Principle | How ChatBar AI Addresses It |
| --- | --- |
| 1. Transparency | Anonymous data architecture is transparent by design. No hidden data collection. Clear documentation of AI capabilities and limitations. |
| 2. Explainability | AI responses are generated through documented processes. Customers can understand how the AI arrives at answers. |
| 3. Repeatability/Reproducibility | Consistent AI behavior through version-controlled models and documented configurations. |
| 4. Safety | Human oversight prevents harmful outputs. Content filtering and safety guardrails built-in. |
| 5. Security | Enterprise-grade security with encryption, access controls, and regular security assessments. |
| 6. Robustness | Regular testing and model tuning to maintain reliability across diverse scenarios. |
| 7. Fairness | Anonymous data prevents discriminatory profiling. No personal characteristics used in AI decisions. |
| 8. Data Governance | Privacy-first architecture with no PII collection. Clear data handling policies and GDPR compliance. |
| 9. Accountability | Human programmers take full responsibility for code and design decisions. Clear escalation paths. |
| 10. Human Agency and Oversight | Customers maintain control. Human review of AI outputs. Ability to override AI decisions. |
| 11. Inclusive Growth, Societal and Environmental Well-being | Carbon-neutral infrastructure. Accessible AI for businesses of all sizes. Ethical AI practices. |
From Principles to Practice: Our Implementation
1. Internal Governance Structures and Measures
Clear Roles and Responsibilities:
- CTO & Lead Developer – Overall technical accountability for AI systems
- Development Team – Code review, testing, and quality assurance
- CCO – Compliance oversight and ethical AI governance
- Security Team – Security assessments and incident response
Standard Operating Procedures (SOPs):
- Code review process for all AI-related changes
- Regular bias and fairness audits of AI outputs
- Security testing and vulnerability assessments
- Incident response procedures for AI-related issues
Staff Training:
- All developers trained on ethical AI principles
- Regular updates on AI governance best practices
- Security awareness training for all staff
- GDPR and privacy training for customer-facing teams
2. Determining the Level of Human Involvement in AI-Augmented Decision-making
Appropriate Degree of Human Involvement:
- High-Risk Decisions: Require human review (e.g., content moderation policies, customer data handling)
- Medium-Risk Decisions: Human oversight available (e.g., AI response customization, model selection)
- Low-Risk Decisions: Automated with human monitoring (e.g., routine conversation responses, FAQ handling)
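The tiering above can be expressed as a simple routing rule. The sketch below is purely illustrative; the category names, tier assignments, and handler labels are hypothetical examples, not ChatBar AI's actual implementation:

```python
# Illustrative sketch of risk-tiered routing for AI-augmented decisions.
# All category names and tiers here are hypothetical examples.
RISK_TIERS = {
    "content_moderation_policy": "high",   # requires human review
    "customer_data_handling": "high",
    "response_customization": "medium",    # human oversight available
    "model_selection": "medium",
    "routine_response": "low",             # automated, human-monitored
    "faq_handling": "low",
}

def route_decision(category: str) -> str:
    """Return the handling mode for a decision category."""
    # Unknown categories default to the safest (highest) tier.
    tier = RISK_TIERS.get(category, "high")
    return {
        "high": "human_review_required",
        "medium": "human_oversight_available",
        "low": "automated_with_monitoring",
    }[tier]
```

Note the fail-safe default: a category the system has never seen is routed to human review rather than automated handling.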
Minimizing Risk of Harm:
- Content filtering to prevent harmful or inappropriate responses
- Human escalation paths for complex or sensitive queries
- Customer control over AI behavior and response style
- Regular review of AI outputs for potential issues
3. Operations Management
Minimizing Bias in Data and Models:
- Anonymous data architecture prevents demographic bias
- Regular audits of AI outputs for fairness across different query types
- Diverse training data to ensure broad representation
- Bias detection testing before deploying new models
Risk-Based Approach to Quality Measures:
- Explainability: Documentation of AI decision-making processes
- Robustness: Regular testing across diverse scenarios and edge cases
- Regular Tuning: Continuous model improvement based on performance metrics
- Monitoring: Continuous monitoring of AI system performance and errors
Quality Assurance:
- Automated testing of AI responses
- Human review of sample conversations
- Customer feedback integration
- Performance metrics tracking (accuracy, response time, user satisfaction)
4. Stakeholder Interaction and Communication
Making AI Policies Known to Users:
- Public Ethical AI Statement (this document)
- Clear privacy policy explaining data practices
- Trust Centre with transparency about AI capabilities
- Customer documentation on AI configuration options
Allowing Users to Provide Feedback:
- Customer support channels for AI-related questions
- Feedback mechanisms in the ChatBar AI Dashboard
- Regular customer surveys on AI performance
- Bug reporting and feature request systems
Making Communications Easy to Understand:
- Plain language documentation (no jargon)
- Visual guides and tutorials
- Clear, upfront explanations of what the AI can and cannot do
Generative AI Governance (2024 Framework)
ChatBar AI also aligns with Singapore’s Model AI Governance Framework for Generative AI (2024), which outlines nine key dimensions:
1. Accountability
Our Approach:
- Clear ownership: CTO and Lead Developer are accountable for AI system design and performance
- Documented decision-making processes
- Escalation procedures for AI-related issues
- Regular reporting to leadership on AI governance
2. Data
Our Approach:
- Privacy-first: Anonymous data architecture with no PII collection
- Data minimization: Only collect data necessary for service delivery
- Data quality: Regular audits to ensure training data quality
- Data protection: Encryption, access controls, and GDPR compliance
3. Trusted Development
Our Approach:
- Human-centric design: Every line of code designed and reviewed by experienced programmers
- Security-first: Secure development lifecycle with code reviews and testing
- Version control: All changes tracked and documented
- Quality assurance: Rigorous testing before deployment
4. Incident Reporting
Our Approach:
- Incident response procedures: Documented processes for AI-related incidents
- Customer notification: Timely communication if incidents affect customers
- Root cause analysis: Investigation and remediation of incidents
- Continuous improvement: Lessons learned integrated into development
5. Testing
Our Approach:
- Pre-deployment testing: Comprehensive testing before releasing new AI features
- Bias and fairness testing: Regular audits for discriminatory outputs
- Performance testing: Accuracy, response time, and reliability metrics
- User acceptance testing: Customer feedback before major releases
6. Security
Our Approach:
- Encryption: TLS 1.2+ in transit, AES-256 at rest
- Access controls: Role-based access and multi-factor authentication
- Vulnerability management: Regular security assessments and patching
- Incident response: 24/7 security monitoring and rapid response
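As a concrete illustration of the transport-encryption requirement, a client can refuse any connection below TLS 1.2 using Python's standard `ssl` module. This is a generic sketch of the principle, not ChatBar AI's actual server configuration:

```python
import ssl

# Build a client-side TLS context that refuses protocol versions
# below TLS 1.2, matching the "TLS 1.2+ in transit" requirement.
ctx = ssl.create_default_context()
ctx.minimum_version = ssl.TLSVersion.TLSv1_2
```

Modern TLS libraries expose an equivalent floor setting; the key point is that the minimum version is enforced in configuration rather than left to negotiation defaults.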
7. Content Provenance
Our Approach:
- Transparency: Clear about when users are interacting with AI vs. human support
- Attribution: AI-generated responses are identifiable
- Audit trails: Logging of AI interactions for accountability
- Version tracking: Documentation of AI model versions in use
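The audit-trail and version-tracking points above can be combined into a single structured log record per interaction. The following is an illustrative sketch with hypothetical field names, not ChatBar AI's actual logging pipeline:

```python
import json
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("ai_audit")

def log_ai_response(conversation_id: str, model_version: str, is_ai: bool) -> str:
    """Emit one structured audit record for an AI interaction."""
    record = {
        "conversation_id": conversation_id,  # anonymous session ID, no PII
        "model_version": model_version,      # which model version replied
        "source": "ai" if is_ai else "human",  # provenance of the response
    }
    line = json.dumps(record, sort_keys=True)
    log.info(line)
    return line
```

Logging the model version alongside each response is what makes later accountability questions ("which model produced this output?") answerable from the audit trail alone.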
8. Safety
Our Approach:
- Content filtering: Prevention of harmful, inappropriate, or illegal content
- Safety guardrails: Built-in limits on AI behavior
- Human oversight: Escalation to human review for sensitive topics
- Continuous monitoring: Real-time detection of safety issues
9. AI for Good
Our Approach:
- Environmental responsibility: Carbon-neutral infrastructure
- Accessibility: AI tools available to businesses of all sizes
- Ethical practices: No discriminatory profiling or invasive tracking
- Social benefit: Helping businesses serve customers better while respecting privacy
Privacy-First AI Architecture
ChatBar AI’s unique privacy-first architecture inherently supports ethical AI principles:
Anonymous Data = Ethical AI by Design
Why this matters:
- No discriminatory profiling: Without personal identifiers, the AI cannot discriminate based on race, gender, age, or other protected characteristics
- Privacy protection: End users’ conversations remain truly private
- Reduced bias risk: Anonymous data prevents the AI from making assumptions about individuals
- Simplified compliance: Less personal data means less regulatory risk
What We Don’t Collect
Our AI systems do not process:
- Names or personal identifiers
- Email addresses or contact information
- IP addresses or device fingerprints
- Location data or geolocation
- Demographic information (age, gender, race, etc.)
- Browsing history or cross-site tracking
Result: Our AI cannot discriminate because it doesn’t know who it’s talking to.
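The "strip before processing" idea behind this list can be sketched in a few lines. The field names below are hypothetical examples; this is not ChatBar AI's actual pipeline:

```python
# Illustrative deny-list mirroring the categories above.
# Field names are hypothetical examples.
BLOCKED_FIELDS = {
    "name", "email", "phone", "ip_address", "device_fingerprint",
    "location", "age", "gender", "race", "browsing_history",
}

def strip_pii(payload: dict) -> dict:
    """Return a copy of an incoming payload with blocked fields removed,
    so only anonymous content ever reaches the AI layer."""
    return {k: v for k, v in payload.items() if k not in BLOCKED_FIELDS}
```

In a production system a deny-list like this would sit behind schema validation and pattern-based PII detection; the sketch only shows the architectural point that identifiers are dropped before, not after, AI processing.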
Continuous Improvement
Regular Reviews
We conduct regular reviews of our AI governance practices:
- Quarterly: AI performance metrics and bias audits
- Semi-annually: Security assessments and cybercrime trend analysis
- Annually: Comprehensive AI governance framework review
- Ongoing: Customer feedback integration and incident analysis
Staying Current
We monitor developments in AI governance globally:
- Singapore: PDPC Model AI Governance Framework updates
- European Union: EU AI Act and related regulations
- International: OECD AI Principles, UNESCO AI Ethics recommendations
- Industry: Best practices from leading AI companies and researchers
- Media: Our own VSP Consortium tracks AI scraping and other contentious issues
Commitment to Transparency
We will update this Ethical AI Statement as our practices evolve and as new governance frameworks emerge. Material changes will be communicated to customers at least 30 days in advance.
Stakeholder Engagement
For Customers
You have the right to:
- Understand how our AI systems work
- Configure AI behavior to meet your needs
- Provide feedback on AI performance
- Request human review of AI decisions
- Access documentation and support
How to engage:
- Email: ai-ethics@chatbar-ai.com
- Dashboard: Submit feedback via ChatBar AI Dashboard
- Support: Contact our support team for AI-related questions
For End Users (Your Customers)
ChatBar AI is designed to:
- Provide helpful, accurate responses
- Respect privacy (no personal data collection)
- Escalate to human support when needed
- Operate transparently (users know they’re talking to AI)
If you configure ChatBar AI to collect personal information:
- You are responsible for obtaining appropriate consent
- You must provide privacy notices to your users
- You must honor data subject rights requests
- You must comply with applicable privacy laws
For Regulators and Auditors
We welcome engagement with:
- Data protection authorities
- AI governance bodies
- Independent auditors
- Academic researchers
We can provide:
- Detailed documentation of AI governance practices
- Evidence of compliance with MAIG principles
- Security certifications and audit reports
- Incident response procedures and records
Contact: compliance@chatbar-ai.com
Accountability and Contact
Responsible Parties
Overall AI Governance:
Mistry, Chief Compliance Officer
Email: c.mistry@chatbar-ai.com
Technical Accountability:
Brod Justice, Co-Founder, CTO & Lead Full-Stack Developer
Email: brod.justice@chatbar-ai.com
AI Ethics Inquiries:
Email: ai-ethics@chatbar-ai.com
Response time: 5 business days
Security Incidents:
Email: security@chatbar-ai.com
Response time: 24 hours (1 hour for critical incidents)
Additional Resources
- Trust Centre: chatbar-ai.com/built-for-trust
- Privacy Policy: chatbar-ai.com/privacy
- Security Documentation: chatbar-ai.com/security
- Terms of Service: chatbar-ai.com/terms
External References:
- Singapore Model AI Governance Framework
- Model AI Governance Framework for Generative AI (2024)
- AI Verify Foundation
Document Version: 1.0
Last Updated: October 30, 2025
Next Review: January 30, 2026
Framework Alignment: Singapore Model AI Governance Framework (Second Edition, 2020) & Model AI Governance Framework for Generative AI (2024).
This Ethical AI Statement demonstrates ChatBar AI’s commitment to responsible AI development and deployment in accordance with Singapore’s Model AI Governance Framework. For binding contractual terms, please refer to your executed agreement with ChatBar AI Pte Ltd.