Guide: AI and Data Privacy Compliance in Hong Kong

Alfred Leung, YTL LLP

The rapid integration of Artificial Intelligence (AI) systems into corporate and fintech operations presents significant opportunities alongside complex legal and reputational risks. In Hong Kong, the Personal Data (Privacy) Ordinance (PDPO) provides the foundational regulatory regime, and the Privacy Commissioner for Personal Data (PCPD) has issued the “Artificial Intelligence: Model Personal Data Protection Framework” (the “Framework”) to provide practical, risk-based guidance for organizations procuring, implementing, and using AI systems that involve personal data. 

This guide analyzes the Framework and outlines the key steps that fintech founders and corporations should consider taking to align their AI initiatives with their binding legal obligations under the PDPO.

Part I: Core Governance and Strategy for AI Adoption

Compliance starts with top-level commitment and a robust governance structure.

  • Establish AI Strategy & Inventory: Top management must define the organization’s ethical principles for AI and identify unacceptable uses. An inventory of AI systems should be established to facilitate governance measures (a minimal sketch of an inventory record follows this list).
  • Governance Structure: Establish an AI Governance Committee (or similar body) reporting to the board. This committee must be led by a C-level executive and composed of a cross-functional team with expertise in law, compliance (including the Data Protection Officer), data science, and cybersecurity.
  • Procurement Due Diligence: When sourcing AI solutions, ensure contracts clearly define Data User and Data Processor roles under the PDPO.
  • Data Processor Agreements: Organizations must adopt contractual or other means to prevent unauthorized or accidental access, processing, erasure, loss, or use of personal data held by the AI supplier/data processor (DPP 4(2)).
  • Cross-border Transfer: If the AI solution involves processing personal data outside Hong Kong (e.g., cloud platforms), the organization (as Data User) must ensure compliance with PDPO requirements for data transfer.
  • Training: Provide adequate training to all relevant personnel, especially human reviewers and legal/compliance teams, on data protection laws, internal AI policies, and the risks associated with AI systems.
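
To make the AI inventory concrete, below is a minimal sketch of what an inventory record might capture. The AIInventoryEntry class and its field names are illustrative assumptions on our part, not fields prescribed by the Framework.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class AIInventoryEntry:
    """One record in an organization's AI inventory (illustrative fields only)."""
    system_name: str
    business_owner: str                   # accountable executive or team
    supplier: str                         # vendor name, or "in-house"
    purpose: str                          # e.g. "credit scoring"
    personal_data_categories: list[str] = field(default_factory=list)
    processes_outside_hk: bool = False    # flags the entry for cross-border transfer review
    risk_level: str = "unassessed"        # e.g. "low" / "high" / "unassessed"
    last_risk_assessment: date | None = None

# Example entry for review by the AI Governance Committee
entry = AIInventoryEntry(
    system_name="CreditScorer v2",
    business_owner="Head of Lending",
    supplier="XYZ Inc.",
    purpose="AI-driven credit scoring",
    personal_data_categories=["income", "repayment history"],
    processes_outside_hk=True,
)
print(entry)
```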

Part II: Risk Assessment and Human Oversight (The Risk-Based Approach)

An essential step in the AI lifecycle under the Framework is conducting a comprehensive, continuous risk assessment.

  • Risk Assessment Focus: The assessment must systematically identify, analyze, and evaluate risks, particularly those related to data privacy and ethical concerns. Key factors to consider include:
    • Data Volume, Sensitivity, and Quality: Is the amount of data adequate but not excessive (DPP 1)? Is the data accurate (DPP 2)? Does it include sensitive personal data (e.g., financial data, biometric data)?
    • Potential Impact: Assess the severity and duration of the impact on individuals’ legal rights, employment, financial prospects, and eligibility for services.
  • Determining Oversight Level: The level of human intervention must be proportionate to the identified risk.
    • High-Risk Systems: Require a “human-in-the-loop” approach, where human actors retain control over the decision-making process to mitigate errors or improper output. Examples include assessing creditworthiness and evaluating job applicants (see the routing sketch after this list).
    • Mitigation Trade-offs: Document the rationale for balancing competing ethical criteria, such as prioritizing explainability over maximal predictive accuracy where a decision significantly affects a customer.
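
As a rough illustration of proportionality, the sketch below maps two assessed risk factors to an oversight level. The risk factors, tiers, and routing rules are simplified assumptions for illustration; a real assessment weighs many more factors and is documented by the governance committee.

```python
from enum import Enum

class Oversight(Enum):
    HUMAN_IN_THE_LOOP = "human reviewer controls the final decision"   # high risk
    HUMAN_IN_COMMAND = "human monitors operation and can intervene"    # medium risk
    HUMAN_OUT_OF_THE_LOOP = "fully automated"                          # low risk

def required_oversight(impacts_legal_rights: bool, uses_sensitive_data: bool) -> Oversight:
    """Map assessed risk factors to an oversight level (illustrative rules only)."""
    if impacts_legal_rights:
        return Oversight.HUMAN_IN_THE_LOOP
    if uses_sensitive_data:
        return Oversight.HUMAN_IN_COMMAND
    return Oversight.HUMAN_OUT_OF_THE_LOOP

# Credit assessment affects individuals' financial prospects -> human-in-the-loop
print(required_oversight(impacts_legal_rights=True, uses_sensitive_data=True))
```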

Part III: Data Handling and System Management

This part focuses on the technical and operational controls needed to ensure the reliability and security of AI systems.

  • Data Minimization (DPP 1): Organizations should use anonymized, pseudonymized, or synthetic data to customize models where appropriate, ensuring that only necessary personal data is used.
  • Testing and Validation: Rigorous testing is required before deployment, especially for customized models, to ensure reliability, robustness, and fairness.
    • Test for fairness and accuracy using appropriate metrics.
    • Test with “holdout” data to ensure the model does not overfit its training set.
    • Implement controls to prevent personal data leakage in AI-generated output.
  • Continuous Monitoring & Security:
    • Monitor and log inputs/prompts to prevent misuse and facilitate audits, in accordance with the data minimization principle.
    • Monitor for “model drift” or “model decay” (degradation in accuracy over time) and fine-tune or re-train the model as necessary (a monitoring sketch follows this list).
    • Establish an AI Incident Response Plan to promptly suspend the system and trigger fallback solutions in case of error or failure.
    • Implement security measures (e.g., red teaming) to minimize the risk of adversarial attacks (e.g., data poisoning).
    • Ensure traceability and auditability of the AI system’s output (e.g., by logging events).
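
As referenced above, the following is a minimal sketch of how model-drift monitoring might be wired up: it tracks rolling accuracy on human-reviewed outcomes and flags when re-training is warranted. The window size and accuracy threshold are illustrative assumptions, not values set by the Framework.

```python
from collections import deque

class DriftMonitor:
    """Track rolling accuracy on labelled outcomes; flag when the model decays."""

    def __init__(self, window: int = 500, min_accuracy: float = 0.90):
        self.outcomes = deque(maxlen=window)  # 1 = correct prediction, 0 = incorrect
        self.min_accuracy = min_accuracy

    def record(self, prediction, actual) -> None:
        """Log one human-reviewed case."""
        self.outcomes.append(1 if prediction == actual else 0)

    def needs_retraining(self) -> bool:
        if len(self.outcomes) < self.outcomes.maxlen:
            return False  # not enough feedback collected yet
        return sum(self.outcomes) / len(self.outcomes) < self.min_accuracy

monitor = DriftMonitor()
# In production: call monitor.record(model_output, ground_truth) on each reviewed case;
# if monitor.needs_retraining() returns True, trigger the incident response plan
# and the re-training workflow.
```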

Part IV: Transparency and Stakeholder Engagement

Effective communication is key to building trust.

  • Information Provision (DPP 1 & 5): Organizations must clearly and prominently disclose the use of AI systems.
    • Data subjects must be informed of the purposes for which their personal data are used (e.g., AI training) and the classes of persons (e.g., AI suppliers) to whom data may be transferred.
  • Explainable AI (XAI): For systems with significant individual impact, organizations should, where feasible, explain:
    • The extent of AI involvement in the decision-making process.
    • How personal data was used and why it was relevant.
    • The major factors leading to the individual decision/output (local explainability; a simplified sketch follows this list).
  • Data Subject Rights & Feedback: Organizations must establish channels for individuals to provide feedback, seek explanations, and request human intervention. They must also support data subjects’ rights to data access and correction (DPP 6).
  • Language: Communication with stakeholders must be in plain language that is clear and understandable to lay persons.
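
As a simplified illustration of local explainability, the sketch below ranks the factors behind one applicant’s score in a linear model, treating each contribution as coefficient × deviation from the training mean. The feature names and numbers are invented for illustration; production systems typically rely on dedicated attribution methods rather than this shortcut.

```python
import numpy as np

def top_factors(coefficients, feature_means, x, feature_names, k=3):
    """Rank features by their contribution to this individual's score:
    coefficient * (value - training mean). A simple local explanation."""
    contributions = coefficients * (x - feature_means)
    order = np.argsort(-np.abs(contributions))[:k]
    return [(feature_names[i], float(contributions[i])) for i in order]

# Illustrative values only, not real model parameters
names = ["income", "debt_ratio", "late_payments", "account_age"]
coefs = np.array([0.8, -1.5, -2.0, 0.3])
means = np.array([0.5, 0.4, 0.1, 0.6])
applicant = np.array([0.3, 0.7, 0.4, 0.5])

for name, c in top_factors(coefs, means, applicant, names):
    direction = "lowered" if c < 0 else "raised"
    print(f"{name} {direction} the score by {abs(c):.2f}")
```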

Common Questions for Fintech Founders and Legal Teams

Set out below are some of the questions that arise from the AI Model Personal Data Protection Framework, together with our answers.

Q: How can our fintech startup minimize privacy risk in AI training without sacrificing accuracy?

A: The paramount principle is Data Minimization (DPP 1). Prioritize the use of anonymized, pseudonymized, or synthetic data for model development. Only collect and process personal data that is strictly necessary, adequate, and not excessive for the defined, lawful purpose.
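
As a minimal sketch of what pseudonymization can look like in practice, the snippet below replaces a direct identifier with a keyed hash before a record enters a training pipeline. The key handling shown is illustrative only; in production the key must live in a separate, access-controlled store so that re-identification remains restricted.

```python
import hmac
import hashlib

SECRET_KEY = b"store-separately-in-a-key-vault"  # illustrative; never hard-code in production

def pseudonymize(identifier: str) -> str:
    """Replace a direct identifier with a stable keyed hash (HMAC-SHA256).
    The same input always maps to the same token, preserving joins across
    records, but the original value cannot be recovered without the key."""
    return hmac.new(SECRET_KEY, identifier.encode(), hashlib.sha256).hexdigest()[:16]

record = {"customer_id": "HK-123456", "income": 52000, "late_payments": 1}
training_record = {**record, "customer_id": pseudonymize(record["customer_id"])}
print(training_record)
```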

Q: Our AI automates credit decisions. What level of human oversight is required?

A: Credit assessment is a high-risk application with a material impact on individuals. You must implement a “human-in-the-loop” model where a qualified human reviewer retains control over the final decision, with the authority to investigate and override the AI’s output.

Q: We plan to use an off-the-shelf AI model. Do we still need an AI Governance Committee?

A: Yes. An AI Governance Committee is critical for the responsible use of all AI systems, regardless of their origin. For procured systems, the committee oversees the procurement process, validates the vendor’s compliance, and manages the ongoing use and risk of the system within your organization.

Q: What contractual obligation must we impose on an AI supplier regarding data security?

A: You must contractually bind the supplier, as a Data Processor, to comply with DPP 4(2) (the security principle). The contract must explicitly require the supplier to implement measures preventing unauthorized or accidental access, processing, erasure, loss, or use of personal data.

Q: What specific information about our AI use must we disclose to customers?

A: You must disclose, under DPP 1(3) and DPP 5, the purpose of use (e.g., “for AI-driven credit scoring”) and the classes of persons to whom the data may be transferred (e.g., “our third-party AI model provider, XYZ Inc.”). This must be set out in your Personal Information Collection Statement (PICS).

Key Action Points for Mitigating Legal Risks

To navigate the evolving regulatory landscape for AI in Hong Kong, corporations and fintech startups should immediately take the following actions:

  1. Formalize Governance: Establish a cross-functional AI Governance Committee with board-level oversight.

  2. Conduct DPIAs: Implement mandatory, AI-specific Data Protection Impact Assessments for all new and existing high-risk systems.

  3. Strengthen Contracts: Ensure all AI supplier agreements contain robust Data Processor clauses compliant with DPP 4(2).

  4. Enhance Transparency: Update privacy policies and collection statements to provide clear, plain-language disclosures about AI use.

  5. Implement Oversight: Integrate human-in-the-loop controls for any AI system whose decisions significantly impact individuals.

Contact our team today for a confidential consultation to ensure your activities are fully compliant with Hong Kong law.


Alfred Leung, Partner

alfredleung@hkytl.com | +852 3468 7202
