Changes to policies with AI

A number of policies across your business may need to be updated, depending on the industry you are in and the organisation’s stance on ethics and regulatory requirements.

Below is a summary of major updates that leading technology organisations have made to their policies, standards and guidelines for customers, suppliers and their own teams. It gives a sense of the breadth of areas where AI could impact your organisation.

  • Google: Google has updated its advertising policy to require political ads using generative AI and deepfakes to clearly disclose that synthetic media is present.
  • Microsoft: Microsoft works to map, measure, and manage risk, and applies multi-layered governance that embeds robust checks on processes and outcomes.
  • Amazon: Amazon has introduced new rules and guidance for Kindle books generated by artificial intelligence tools, including the requirement that authors inform it when content is AI-generated.
  • IBM: IBM is focused on helping its enterprise customers train and deploy generative AI models while keeping data privacy and regulatory requirements in mind.
  • OpenAI: OpenAI requires that consumer-facing uses of its models in the medical, financial and legal industries, in news generation or summarisation, and wherever else warranted, provide a disclaimer informing users that AI is being used and of its potential limitations.

Please note that these are just summaries. For more detailed information, you can refer to the respective sources.

Department Impacts

Here are some internal policies that companies should review for AI implications across each major department:

People: Policies should focus on data privacy, the accuracy and appropriateness of AI predictions, and the ethical use of AI in employee management. It’s also important to consider the implications of AI for the future workforce, change management for adoption, and shifting organisational needs. Standards should also be updated so that AI is designed in line with modern learning principles.

Further reading:

The impact of generative AI on human resources | McKinsey

Ethical AI: guidelines and best practices for HR pros – Workable

The Role of Change Management When Implementing AI – Salesforce Canada Blog

Technology: Policies should address the use of AI in risk management, legal and compliance issues, and data loss prevention.

Further reading:

Importance of Internal AI Policies at Work

6 best practices to develop a corporate use policy for generative AI | CIO

Marketing: Implementing AI is not without its challenges, and marketing leaders must take a strategic approach to maximise its value while minimising customer risks. Policies should be updated to reflect customer privacy, content guidelines and disclaimers on digital platforms. They should also address customer-facing aspects such as technical robustness, safety, privacy, data governance, transparency, diversity, non-discrimination, fairness, societal and environmental well-being, and accountability.

Further reading:

Marketing Operations In The Age Of AI

Finance: Policies should emphasise humans-in-the-loop and align AI models to a shared vision for finance. Prioritise strengthening trust and safety by developing finance-specific guidelines and guardrails.

Further reading:

How generative AI will impact the finance department – Digital Nation

Implications of Generative AI in Finance – Deloitte

Please note that these are just general guidelines. The specific policies would depend on your company’s environment.

Disclaimers for AI

It’s always important to use AI tools responsibly, with an understanding of their limitations and reminders for users. Disclaimers for AI are important for several reasons:

  1. Accuracy: AI systems learn from data, and their responses are based on the patterns they’ve learned, so they will not always be 100% accurate. The disclaimer helps users understand that there may be errors or inaccuracies in the AI’s responses.
  2. Data Privacy: AI systems often process large amounts of data. A disclaimer can inform users about what data is collected and how it’s used, ensuring transparency around data privacy.
  3. Limitations: AI has its limitations. It doesn’t have feelings, consciousness, or the ability to understand context in the same way humans do. A disclaimer helps set realistic expectations about what the AI can and can’t do, so users are prepared when an ambiguous or unhelpful response is provided.
  4. Liability: Disclaimers can protect the provider of the AI service from legal liability by informing users that the AI’s responses should not be relied upon unaided for critical decisions.
  5. Ethics: AI should be used responsibly, and a disclaimer can remind users of this and point to guidelines for ethical use.

Given that AI is often integrated into broader solutions, a concise summary should be shown wherever users interact with AI. It needs to be quite snappy, with links to more detail such as a privacy policy and terms of use. Looking at a few major examples helps guide the design of your own AI disclaimers.

Microsoft

Copilot is powered by AI, so surprises and mistakes are possible.

Terms of use | Privacy Policy

✓ Simple

✓ More detail

✗ No request for feedback

✗ No warning on sensitive info

Outlines the key areas

  • Copilot – the product that you are using
  • powered by AI – letting you know it is AI
  • surprises and mistakes are possible – outlining the risks of inaccuracy and limitations
  • Terms of use, Privacy Policy – links to more details

Google

Bard may display inaccurate or offensive information that doesn’t represent Google’s views. Bard Privacy Notice

I have limitations and won’t always get it right, but your feedback will help me to improve.

Human reviewers may process your Bard conversations for quality purposes. Don’t enter sensitive info. Learn more

✓ More comprehensive

✓ Feedback

✓ Not to use sensitive info

✗ No mention of AI

✗ Repetitive across the notices

Outlines the key areas

  • Bard – the product that you are using (although in one instance it uses ‘I’)
  • No explicit mention that it is using AI
  • may display inaccurate or offensive information – outlining the risks of inaccuracies and limitations
  • Bard Privacy Notice and Learn more – links to more details
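
Pulling these examples together, below is a minimal TypeScript sketch of how a product team might assemble its own short disclaimer from the elements identified above (product name, explicit mention of AI, limitations, optional sensitive-information and feedback notes, and links to more detail). The interface and function names are illustrative assumptions rather than any vendor’s API, and the URLs are placeholders.

```typescript
// Illustrative sketch only: names and structure are assumptions, not a vendor API.

interface PolicyLink {
  label: string; // e.g. "Terms of use" or "Privacy Policy"
  url: string;
}

interface AiDisclaimerConfig {
  productName: string;           // the product the user is interacting with
  mentionsAi: boolean;           // explicitly state that AI is involved
  limitationsNote: string;       // risks of inaccuracy and limitations
  sensitiveInfoWarning?: string; // optional: warn against entering sensitive info
  feedbackPrompt?: string;       // optional: invite feedback to improve the system
  links: PolicyLink[];           // links to more detail
}

// Assemble a single snappy notice plus links, mirroring the elements
// identified in the Microsoft and Google examples above.
function buildDisclaimer(config: AiDisclaimerConfig): string {
  const parts: string[] = [];

  // Name the product and, where applicable, state explicitly that AI is involved.
  const aiClause = config.mentionsAi ? " is powered by AI, so " : ": ";
  parts.push(`${config.productName}${aiClause}${config.limitationsNote}`);

  // Optional warnings and feedback prompt.
  if (config.sensitiveInfoWarning) {
    parts.push(config.sensitiveInfoWarning);
  }
  if (config.feedbackPrompt) {
    parts.push(config.feedbackPrompt);
  }

  // Links to more detail, e.g. terms of use and privacy policy.
  const links = config.links.map((l) => `${l.label} (${l.url})`).join(" | ");
  return `${parts.join(" ")} ${links}`.trim();
}

// Example usage with placeholder URLs.
const disclaimer = buildDisclaimer({
  productName: "Copilot",
  mentionsAi: true,
  limitationsNote: "surprises and mistakes are possible.",
  sensitiveInfoWarning: "Don't enter sensitive info.",
  feedbackPrompt: "Your feedback helps it improve.",
  links: [
    { label: "Terms of use", url: "https://example.com/terms" },
    { label: "Privacy Policy", url: "https://example.com/privacy" },
  ],
});

console.log(disclaimer);
```

Keeping the disclaimer as structured data like this makes it easier to keep the wording consistent across every surface where users interact with AI, and to point to the current terms of use and privacy policy.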

The need for disclaimers on the use of AI extends far beyond technology solutions. There are numerous examples across many professions, from law to political parties and advertisers.

The disclaimers in this post were collected on 2 November 2023 and are subject to change on the source websites.