
uRecruits Inc. - Corporate AI Policy (U.S.-Focused)

1. Purpose

This policy defines the ethical, legal, and operational principles guiding the development and deployment of AI systems at uRecruits Inc.

As an AI-powered recruitment automation platform, uRecruits Inc. is committed to ensuring that its technologies are used responsibly, transparently, and in compliance with U.S. laws and industry best practices.

2. Scope

This policy applies to:

  • All AI agents, models, tools, workflows, and supporting infrastructure developed or integrated by uRecruits Inc.
  • All employees, contractors, and vendors interacting with uRecruits Inc. AI systems.
  • All U.S.-based users and customers engaging with uRecruits Inc. services.

3. Core Principles

3.1 Ethical Usage
  • AI must be designed to uphold fairness, accountability, and transparency.
  • Human oversight is mandatory in all high-stakes workflows (e.g., hiring decisions).
  • AI outputs must avoid perpetuating bias or discrimination.
3.2 Transparency
  • Users must be informed when interacting with AI systems.
  • Explanations for decisions (e.g., candidate rankings) must be available upon request.
  • Disclosures regarding AI-generated content must be clear and accessible.
3.3 Accountability
  • Final decision-making authority lies with human users. Human oversight is enforced through mandatory review checkpoints in all high-impact workflows, including but not limited to candidate rejection, offer generation, and final selection decisions.
  • Automated actions are routed to designated HR reviewers for validation, with clear audit trails logged for each decision.
  • Logs must be maintained for all AI-driven actions and decisions.
  • Engineers and product owners are accountable for monitoring and tuning AI performance.
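
As a non-normative illustration of the audit trail requirement above, an AI-driven action could be logged as a structured, timestamped entry before being routed to a human reviewer. The field names, model identifier, and reviewer address below are hypothetical examples, not part of this policy:

```python
import json
from datetime import datetime, timezone

def log_ai_action(decision_log: list, action: str, model: str,
                  candidate_id: str, reviewer: str, outcome: str) -> dict:
    """Append a structured, timestamped audit entry for an AI-driven action."""
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "action": action,              # e.g. "candidate_rejection"
        "model": model,                # model or agent identifier
        "candidate_id": candidate_id,
        "reviewer": reviewer,          # human reviewer who validated the action
        "outcome": outcome,
    }
    decision_log.append(entry)
    return entry

# Hypothetical usage: log a rejection routed to an HR reviewer for validation.
log: list = []
entry = log_ai_action(log, "candidate_rejection", "ranker-v2",
                      "cand-001", "hr.reviewer@urecruits.com", "rejected")
print(json.dumps(entry, indent=2))
```

In practice such entries would be written to an append-only store so they remain available for the compliance and ethical reviews described in Section 6.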

4. Data Privacy & Security (U.S.-Focused)

4.1 Legal Compliance
  • uRecruits Inc. complies with applicable U.S. data protection laws and regulatory guidance, including:
    • California Consumer Privacy Act (CCPA)
    • New York SHIELD Act
    • Federal Trade Commission (FTC) guidelines
4.2 Consent & Disclosure
  • Informed consent is required for data processing and AI usage.
  • Consent is obtained explicitly during the job application process through the user interface and clearly worded consent forms.
  • Users are informed about the scope and purpose of AI involvement, and candidates may withdraw consent at any time, in line with applicable state privacy laws such as the CCPA.
  • Privacy notices must include AI-related activities and data retention terms.
4.3 Data Minimization & Storage
  • Only essential data is collected and stored.
  • Data must be anonymized where possible and encrypted both in transit and at rest.
  • Data retention limits are defined and enforced through policy.
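
As an illustrative sketch of the anonymization step, direct identifiers can be replaced with salted hashes before storage. Note that salted hashing provides pseudonymization rather than full anonymization, and the record fields and salt value below are hypothetical:

```python
import hashlib

def pseudonymize(record: dict, pii_fields: set, salt: str) -> dict:
    """Replace direct identifiers with truncated salted SHA-256 tokens."""
    out = {}
    for key, value in record.items():
        if key in pii_fields:
            digest = hashlib.sha256((salt + str(value)).encode()).hexdigest()
            out[key] = digest[:16]   # stable token; the original value is not stored
        else:
            out[key] = value
    return out

# Hypothetical candidate record; only "name" and "email" are treated as PII here.
candidate = {"name": "Jane Doe", "email": "jane@example.com", "score": 87}
print(pseudonymize(candidate, {"name", "email"}, salt="rotate-me"))
```

Because the same input and salt always yield the same token, records can still be joined for audits without retaining the raw identifier; the salt itself must be protected and rotated.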
4.4 Security Protocols
  • Role-based access control (RBAC), MFA, and endpoint security are enforced.
  • AI systems undergo routine security audits.
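
A minimal sketch of the role-based access control check referenced above; the roles and permissions shown are hypothetical examples, not the production role model:

```python
# Hypothetical roles and permissions; a real role model would live in managed config.
ROLE_PERMISSIONS = {
    "hr_reviewer": {"view_candidate", "approve_decision"},
    "engineer":    {"view_logs", "tune_model"},
    "auditor":     {"view_logs", "view_candidate"},
}

def is_authorized(role: str, permission: str) -> bool:
    """Return True only if the role's permission set includes the requested action."""
    return permission in ROLE_PERMISSIONS.get(role, set())

print(is_authorized("hr_reviewer", "approve_decision"))  # True
print(is_authorized("engineer", "approve_decision"))     # False
```

Unknown roles resolve to an empty permission set, so access is denied by default.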

5. System Design & Risk Management

5.1 Architecture and Agent Behavior
  • Agent memory is structured and scoped to minimize context leakage.
  • Planning, task decomposition, and feedback loops are built into agents for continuous improvement.

5.2 Risk Identification

  • Potential risks such as bias, error propagation, or tool misuse are identified through internal risk assessments.
  • Red team exercises are conducted annually.
5.3 Model Audits and Testing
  • Models are regularly tested for fairness, accuracy, and bias through a documented quarterly testing cycle.
  • Each cycle includes performance benchmarking, demographic fairness checks, and accuracy validation using labeled datasets.
  • Review procedures are documented and maintained by the AI Governance Council, and findings are incorporated into ongoing model tuning and retraining plans.
  • Retraining schedules and thresholds for drift are documented.
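
As a hedged illustration of how a documented drift threshold might be applied, the sketch below flags any metric whose decline from its baseline exceeds a fixed threshold; the metric names and the 0.03 threshold are invented for the example:

```python
def needs_retraining(baseline: dict, current: dict, drift_threshold: float = 0.03) -> list:
    """Return the metrics whose decline from baseline exceeds the drift threshold."""
    flagged = []
    for metric, base_value in baseline.items():
        if base_value - current.get(metric, 0.0) > drift_threshold:
            flagged.append(metric)
    return flagged

# Hypothetical quarterly readings: accuracy fell 0.06, fairness ratio fell 0.01.
baseline = {"accuracy": 0.91, "fairness_ratio": 0.88}
current  = {"accuracy": 0.85, "fairness_ratio": 0.87}
print(needs_retraining(baseline, current))  # ['accuracy']
```

Any flagged metric would feed into the retraining plans maintained by the AI Governance Council.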

6. Monitoring, Auditing, and Governance

6.1 Monitoring
  • Continuous monitoring is deployed across high-impact workflows.
  • Performance and behavior of agents are tracked through observability tools.
6.2 Auditing
  • Quarterly audits are conducted on hiring decisions, ranking outcomes, and user feedback.
  • Logs are retained and made available for compliance and ethical review.
6.3 Governance Council
  • An internal AI Governance Council oversees all AI development.
  • Council responsibilities include:
    • Reviewing new AI use cases
    • Approving third-party tools and datasets
    • Resolving ethical or legal concerns

7. Training & Awareness

7.1 Staff Education
  • All employees undergo annual training on AI ethics, data privacy, and compliance.
  • Completion of this training is mandatory and tracked via the internal learning management system.
  • Training effectiveness is evaluated through post-training assessments, feedback surveys, and regular audits of behavior and decision-making in AI-related workflows.
  • Product teams receive quarterly updates on changes in regulations or AI standards.
7.2 Escalation Procedures
  • A clear process exists for employees to report concerns or unintended consequences related to AI.
  • Reports are investigated by the Governance Council within 10 business days.

8. Intellectual Property and AI-Generated Content

8.1 Ownership and Attribution
  • All AI-generated content is reviewed and attributed appropriately.
  • IP ownership remains with uRecruits Inc. unless contractually reassigned.
8.2 External Use and Review
  • AI-generated content used in marketing, documentation, or assessments must pass human review before publishing.
  • No AI-generated content may replicate or mimic third-party content without explicit rights.

9. Social and Employment Impact

  • uRecruits Inc. considers the broader social implications of its AI systems.
  • All hiring-related agents are designed to promote equity and reduce bias.
  • Candidate experience, fairness, and inclusion are monitored and evaluated using defined KPIs such as:
    • Demographic distribution in applicant flow
    • Offer rate parity across groups
    • Candidate satisfaction scores
    • Audit flags on screening decisions
  • These metrics are reviewed quarterly by the AI Governance Council and used to drive continuous improvements in model tuning and user experience design.
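
As an illustrative sketch of one such KPI, offer rate parity can be computed as the ratio of the lowest to the highest group offer rate; ratios below 0.8 echo the EEOC four-fifths rule of thumb for adverse impact. The group labels and counts below are invented examples:

```python
def offer_rate_parity(offers_by_group: dict) -> float:
    """Ratio of the lowest to the highest group offer rate (1.0 = perfect parity)."""
    rates = [offers / applicants for offers, applicants in offers_by_group.values()]
    return min(rates) / max(rates)

# Hypothetical counts: (offers, applicants) per group.
data = {"group_a": (30, 100), "group_b": (24, 100)}
ratio = offer_rate_parity(data)
print(round(ratio, 2))  # 0.8
```

A ratio at or below 0.8 would be surfaced to the AI Governance Council in its quarterly review.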

10. Policy Maintenance

  • This policy is reviewed and updated at least twice a year.
  • Major revisions must be approved by executive leadership and the AI Governance Council.

Applies To: All AI systems, employees, and contractors operating within or targeting the United States market

For questions or compliance support, contact: compliance@urecruits.com

Version: 1.0

Effective Date: June 5, 2025
