AI Policy, Principles & Usage Standards

Key steps

When you’re ready to produce your company AI guidelines, here are some steps you can follow to plan, create, and implement your protocols.

  1. Establish a working group of broad stakeholders
  2. Define AI, Purpose & Scope
  3. Establish your core AI principles
  4. Assess AI Risks and Compliance needs
  5. Set accountability
  6. Design monitoring, control and review procedures
  7. Plan & Implement
  8. Develop best practice standards & training

There are some great guidelines out there for this process. Later in this post we look at two extremes, from corporate governance to startups, which follow a similar pattern. Below is a summary of a path that you may be able to follow.

Establish a working group of broad stakeholders

Create a group of people to lead the development of your AI policy, principles and usage standards. This working group should include leadership and may even include board members, executives, and department heads, as well as relevant technical experts. They should have a foundational understanding of AI and its benefits and risks (or may require this education as the first step). The aim is to gather diverse insights and ensure the policy addresses all relevant aspects.

Define AI, Purpose & Scope

Create a common understanding of what AI is. It’s important to create a definition of AI that suits your organisation, as this is the foundation for an AI policy, principles or standard, and it makes clear to your teams what is considered AI and what is not.

Identify the ways in which you will use AI in your operations or develop AI technologies. Define the overarching goals of AI use within your organisation and the boundaries of its application (including what you won’t do).

Within the scope, be clear on which components are required – for example, policies, principles, standards, and/or procedures. This will also help determine whose help is required.

Establish your core AI principles

Determine how your company’s core principles will relate to AI use and development. These principles will inform the ethical foundation of your AI usage policy. Some initial focus areas are autonomy, bias, security, and privacy.

Assess AI Risks and Compliance needs

Learn about the legislation that governs AI tools and ensure that your guidelines are compliant with these regulations. Keep an eye on draft legislation and guidance so you can design with future compliance in mind as much as possible.

Identify potential risks associated with AI usage and ensure compliance with existing policies and legal and ethical standards. Define the ethical principles that will guide the development and deployment of AI, such as fairness, transparency, and privacy.

There are some great frameworks out there, and our preferred starting point is the EU AI Act, which categorises AI systems into four risk levels:

  1. Unacceptable Risk: Applications that are considered a threat to people.
    These include subliminal techniques, exploitative systems and social scoring systems. Also covered are real-time remote biometric identification systems (such as facial recognition).

    These are banned, unless permitted in rare cases for law enforcement purposes or with a court order.
  2. High Risk: These are AI systems subject to significant regulatory obligations – primarily AI in products covered by existing safety laws and software used to manage critical facilities or people’s health.

    This requires enhanced thresholds of diligence, initial risk assessment, and transparency. The technology itself will need to comply with certain requirements – including around risk management, data quality, transparency, human oversight, and accuracy.
  3. Limited Risk: These AI systems will have to comply with transparency requirements – complying with copyright law, disclosing that content was generated by AI, and being designed to prevent the generation of illegal content.
  4. Minimal or No Risk: These AI systems likely have the least regulatory oversight.

There is also a potential category of Systemic Risk for high-impact general-purpose AI models like GPT-4. This is in line with how other systemically important systems (banking, insurance or infrastructure) are treated.

Each risk level has corresponding regulatory requirements to ensure that the level of oversight is appropriate to the risk level. The Act adopts a risk-based approach, aiming to boost public confidence and trust in technology.
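
To make the tiering concrete, here is a minimal sketch of how a team might record risk levels and their obligations internally. The use cases, tier assignments and obligation lists below are simplified assumptions for illustration, not the Act’s legal text.

```typescript
// Illustrative tiering: map a use case to a risk level, then to obligations.
type RiskLevel = "unacceptable" | "high" | "limited" | "minimal";

// Example classification of internal use cases (assumed, not legal advice).
const exampleClassification: Record<string, RiskLevel> = {
  socialScoring: "unacceptable",   // banned outright
  cvScreeningForHiring: "high",    // significant regulatory obligations
  customerChatbot: "limited",      // transparency obligations
  spamFilter: "minimal",           // little or no oversight
};

// Simplified obligations per tier, loosely following the Act's themes.
const obligations: Record<RiskLevel, string[]> = {
  unacceptable: ["do not deploy"],
  high: ["risk assessment", "data quality checks", "human oversight", "logging and transparency"],
  limited: ["disclose to users that content or decisions involve AI"],
  minimal: ["follow general company policy"],
};

// Look up what a given use case requires before it goes live.
function obligationsFor(useCase: string): string[] {
  const level = exampleClassification[useCase] ?? "minimal";
  return obligations[level];
}

console.log(obligationsFor("cvScreeningForHiring"));
// -> ["risk assessment", "data quality checks", "human oversight", "logging and transparency"]
```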

Set accountability

Create a structure for decision-making, accountability, and oversight of AI systems.

Determine who will be responsible for which stages of selection, use, development and monitoring of AI tools in your organisation. Create processes for reporting and governance so that your usage guidelines are followed as intended.
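
As a simple illustration of the output of this step, accountability can be recorded as a matrix from lifecycle stage to accountable role; the stages and role names below are hypothetical placeholders.

```typescript
// Hypothetical accountability matrix: lifecycle stage -> accountable role.
// Adapt the stages and roles to your own organisation.
const aiAccountability: Record<string, string> = {
  selection: "Head of Technology",   // approves new AI tools
  use: "Department heads",           // ensure compliant day-to-day use
  development: "Engineering lead",   // owns build and testing standards
  monitoring: "Risk & compliance",   // reviews performance and incidents
};

console.log(`Monitoring owner: ${aiAccountability["monitoring"]}`);
```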

Design monitoring, control and review procedures

Establish procedures for monitoring AI performance and managing deviations from the policy. These procedures will enable you to monitor how your AI systems are performing and whether they are following your agreed policies. You should also regularly evaluate your policies to ensure they remain effective at mitigating risks and evolve with changes in processes and technology.
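
As a minimal sketch of what such a procedure might look like in practice, the check below compares reported metrics against thresholds agreed in the policy. The metric names and threshold values are illustrative assumptions, not a standard.

```typescript
// Policy thresholds agreed by the working group (illustrative values).
const policyThresholds = {
  minAccuracy: 0.9,       // minimum share of sampled outputs judged correct
  maxFlaggedRate: 0.02,   // maximum share of outputs flagged by reviewers
};

interface Metrics {
  accuracy: number;
  flaggedRate: number;
}

// Return the list of policy deviations found in this reporting period.
function checkAgainstPolicy(metrics: Metrics): string[] {
  const deviations: string[] = [];
  if (metrics.accuracy < policyThresholds.minAccuracy) {
    deviations.push("accuracy below policy threshold");
  }
  if (metrics.flaggedRate > policyThresholds.maxFlaggedRate) {
    deviations.push("too many outputs flagged by human reviewers");
  }
  return deviations; // route non-empty results into your review process
}

console.log(checkAgainstPolicy({ accuracy: 0.87, flaggedRate: 0.01 }));
// -> ["accuracy below policy threshold"]
```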

Plan & Implement

When the guidelines are complete, they need to be communicated clearly to all stakeholders so that everyone understands them (potentially with a test). Provide training for employees on responsible AI usage and the implications of the AI policy. Given the impacts across most workforces, this training could cover the full company as AI users, with a focused group for AI designers or implementers.

Once this is done you should have a process to monitor and update the policy.

Develop best practice standards & training

Create standards for using AI while mitigating risk and balancing human and artificial intelligence appropriately. Outline key scenarios, for example customer risks, to help teams understand what is required.

What is out there today?

Google AI Principles

Google have a very clear set of principles outlined on their AI site: https://ai.google/responsibility/principles/

Below is a summary based on the website as at 23 March 2024. The structure is very easy to understand and read, with broad outcomes they want to focus on as well as areas where they will not participate (including acknowledging that this list has changed and will change as they learn more).

What is allowed?

  • Be socially beneficial.
  • Avoid creating or reinforcing unfair bias.
  • Be built and tested for safety.
  • Be accountable to people.
  • Incorporate privacy design principles.
  • Uphold high standards of scientific excellence.
  • Work to limit potentially harmful or abusive applications.

What is not allowed?

Google will not design or deploy AI in applications that:

  • Cause or are likely to cause overall harm.
  • Are weapons or other technologies whose principal purpose is to cause injury.
  • Enable surveillance violating internationally accepted norms.
  • Contravene widely accepted principles of international law and human rights.

Google notes that this list may evolve.

Google also publish archived versions of their principles going back to 2019, which have stayed largely consistent.

OECD AI Principles

Below is a summary based on the website (https://oecd.ai/en/ai-principles) as at 23 March 2024.

The OECD AI Principles promote use of AI that is innovative and trustworthy and that respects human rights and democratic values. The principles represent a common aspiration that can shape a human-centric approach to trustworthy AI.

Values-based principles

  • Inclusive growth, sustainable development and well-being
  • Human-centred values and fairness
  • Transparency and explainability
  • Robustness, security and safety
  • Accountability

Recommendations

  • Investing in AI R&D
  • Fostering a digital ecosystem for AI
  • Providing an enabling policy environment for AI
  • Building human capacity and preparing for labour market transition
  • International co-operation for trustworthy AI

As an organisation focussed on policy development, the OECD concentrates on the recommendations and actions that countries and policymakers should take. Their recommendations highlight areas that a company also needs to focus on: investment, delivering a digital ecosystem, outlining policies, and providing upskilling and transition support for current team members.

Spark NZ AI Principles

Spark have committed to and made their principles public, which shows great leadership within NZ corporates. Their policy is available here:

Spark NZ AI Principles (December 2023).pdf

Artificial Intelligence (AI) technologies are evolving and being deployed at scale. These technologies have increasingly sophisticated capabilities, some of which can directly impact people or influence their behaviours, opinions, and choices.

  • Human centred
  • Ethical design
  • Diversity, inclusivity, and bias
  • Safety and reliability
  • Privacy
  • Informed human decision making
  • Explicability and transparency

Interestingly, unlike Google, Spark do not outline what they will not do – rather, how they will act in line with their ethics and requirements.

Wrap Up

There are many resources available as a starting point for getting this underway at your company – you just need to start. Make sure that you engage broad stakeholders to give you great inputs on the future needs of your business to deliver on responsible AI (or no AI if that’s the business decision and risk appetite). We look forward to seeing many more companies in New Zealand publish their approach and share with their customers and key stakeholders for a more transparent future for us all.

Changes to policies with AI

A number of policies across your business may need to be updated, depending on the type of industry you are in and the organisation’s stance on ethics and regulatory needs.

Below are some major updates that leading technology organisations have made to their policies, standards and guidelines for customers, suppliers and their own teams. This shows the breadth of where AI could impact your organisation.

  • Google – has updated its advertising policy to require political ads using generative AI and deepfakes to clearly disclose that synthetic media is present.
  • Microsoft – works to map, measure, and manage risk and applies multi-layered governance that embeds robust checks on processes and outcomes.
  • Amazon – has introduced new rules and guidance for Kindle books generated by artificial intelligence tools, including the requirement that authors inform it when content is AI-generated.
  • IBM – is focused on helping its enterprise customers train and deploy generative AI models while keeping data privacy and regulatory requirements in mind.
  • OpenAI – requires that consumer-facing uses of its models in medical, financial, and legal industries; in news generation or news summarisation; and where else warranted, provide a disclaimer informing users that AI is being used and of its potential limitations.

Please note that these are just summaries. For more detailed information, you can refer to the respective sources.

Department Impacts

Here are some internal policies that companies should review for AI implications across each major department:

People: Policies should focus on data privacy, the accuracy and appropriateness of AI predictions, and the ethical use of AI in employee management. It’s also important to consider the implications of AI for the future workforce, change management for adoption, and shifting organisational needs. Update learning standards so that AI training is designed using modern learning principles.

Further reading:

The impact of generative AI on human resources | McKinsey

Ethical AI: guidelines and best practices for HR pros – Workable

The Role of Change Management When Implementing AI – Salesforce Canada Blog

Technology: Policies should address the use of AI in risk management, legal and compliance issues, and data loss prevention. 

Importance of Internal AI Policies at Work

6 best practices to develop a corporate use policy for generative AI | CIO

Marketing: Implementing AI is not without its challenges, and marketing leaders must take a strategic approach to maximise its value while minimising customer risks. Policies should be updated to reflect customer privacy, content guidelines, and disclaimers on digital platforms. They should also focus on customer aspects related to technical robustness, safety, privacy, data governance, transparency, diversity, non-discrimination, fairness, societal and environmental well-being, and accountability.

Marketing Operations In The Age Of AI

Finance: Policies should emphasise humans-in-the-loop and align AI models to a shared vision for finance. Prioritise strengthening trust and safety, and develop finance-specific guidelines and guardrails.

How generative AI will impact the finance department – Digital Nation

Implications of Generative AI in Finance – Deloitte

Please note that these are just general guidelines. The specific policies would depend on your company’s environment.

Disclaimers for AI

It’s always important to use AI tools responsibly, with an understanding of their limitations and reminders for users. Disclaimers for AI are important for several reasons:

  1. Accuracy: AI systems learn from data and their responses are based on patterns they’ve learned. They will not always be 100% accurate. The disclaimer helps users understand that there may be errors or inaccuracies in the AI’s responses.
  2. Data Privacy: AI systems often process large amounts of data. A disclaimer can inform users about what data is collected and how it’s used, ensuring transparency around data privacy.
  3. Limitations: AI has its limitations. It doesn’t have feelings, consciousness, or the ability to understand context in the same way humans do. A disclaimer helps set realistic expectations about what the AI can and can’t do, so users are prepared when an ambiguous or unhelpful response is provided.
  4. Liability: Disclaimers can protect the provider of the AI service from legal liability by informing users that the AI’s responses should not be relied upon for critical decisions unaided.
  5. Ethics: AI should be used responsibly and a disclaimer can remind users of this and provide guidelines for ethical use.

Given that much AI is integrated into broader solutions, a good summary should be provided wherever users interact with AI, so it needs to be quite snappy, with links to more details such as a privacy policy and terms of use. Looking at a few major examples helps guide the design of your own AI disclaimers.

Microsoft

Copilot is powered by AI, so surprises and mistakes are possible.

Terms of use | Privacy Policy

✓ Simple

✓ More detail

✗ Request feedback

✗ No warning on sensitive info

The disclaimer outlines the key areas:

  • Copilot – the product that you are using
  • powered by AI – letting you know it is AI
  • surprises and mistakes are possible – outlining the risks of inaccuracy and limitations
  • Terms of use, Privacy Policy – links to more details

Google

Bard may display inaccurate or offensive information that doesn’t represent Google’s views. Bard Privacy Notice

I have limitations and won’t always get it right, but your feedback will help me to improve.

Human reviewers may process your Bard conversations for quality purposes. Don’t enter sensitive info. Learn more

✓ More comprehensive

✓ Feedback

✓ Not to use sensitive info

✗ No mention of AI

✗ Repeated

The disclaimer outlines the key areas:

  • Bard – the product that you are using (although one message uses ‘I’ instead)
  • No explicit mention that it is AI
  • may display inaccurate or offensive information – outlining the risks of inaccuracies and limitations
  • Bard Privacy Notice and Learn more – links to more details
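
Pulling these patterns together – name the product, say it is AI, state its limitations, and link to more detail – here is a hypothetical sketch of a reusable disclaimer. All product names, strings and URLs are placeholders.

```typescript
// A hypothetical, reusable disclaimer structure for AI-powered features.
interface Disclaimer {
  product: string;                          // the product the user is using
  summary: string;                          // risks and limitations, kept snappy
  links: { label: string; url: string }[];  // pointers to more detail
}

function renderDisclaimer(d: Disclaimer): string {
  const links = d.links.map((l) => `${l.label}: ${l.url}`).join(" | ");
  return `${d.product} is powered by AI. ${d.summary}\n${links}`;
}

console.log(
  renderDisclaimer({
    product: "ExampleBot",
    summary: "Responses may be inaccurate. Don't enter sensitive info.",
    links: [
      { label: "Terms of use", url: "https://example.com/terms" },
      { label: "Privacy policy", url: "https://example.com/privacy" },
    ],
  })
);
```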

The need for disclaimers on the use of AI extends far beyond technology solutions. There are numerous examples across many professions, from law to political parties and advertisers.

The disclaimers in this post were collected on 2 November 2023 and are subject to change on the source websites.