AI Policy

At Ink, we believe that generative artificial intelligence (Gen AI) can be a powerful tool for speed and efficiency – and even an aid to human creativity. But used carelessly, it brings significant risks. This policy outlines how and when we use Gen AI ethically and responsibly. It applies to everyone at Ink and any associates we work with.

Our principles

Humans have the first – and final – word
Gen AI is a creative tool, not a replacement for human writers. All research, concepts and words we produce will always be led by humans and carefully checked by a professional.

No smoke and mirrors
We’ll always be transparent with our clients about how and when we use Gen AI. We’ll ask them to approve our use of AI and let them know whenever it has been used in content creation.

Always hallucination-free, high quality
We’ll check all outputs carefully – reading and re-reading everything for accuracy, relevance, originality, tone of voice and messaging. AI is no excuse for inaccurate or mediocre results.

Ethical awareness
We’ll read everything with an awareness of AI’s biases – whether temporal, cultural, gender-based, racial or algorithmic. We’ll carefully check our content to ensure it doesn’t misrepresent, misinform or plagiarise.

Data privacy assured
We only use paid-for AI tools whose terms comply with our data protection obligations. No confidential or sensitive client data will be shared with AI platforms unless our client authorises it.

Continual improvement
AI is changing rapidly, so we’ll run training every three to six months to keep our team up to speed with the latest thinking on its capabilities, limitations and ethical considerations.

AI tools

With our clients’ permission, we may use the following AI tools:

  • ChatGPT (Plus / Business)
  • Notta (Business Plan)
  • Claude (Pro Plan)
  • Jasper (Business Plan)
  • Perplexity
  • Gemini (Business Plan)

Accountability and monitoring

A designated AI officer within Ink will oversee adherence to this policy, periodically review AI outputs and manage compliance. We encourage all team members to report any ethical concerns or violations immediately.

This policy is subject to annual review to reflect technological developments, industry standards and stakeholder feedback.

When will we use AI?
(with client permission)

  • Ideation and brainstorming
  • Grammar and style checks
  • Content optimisation (SEO suggestions, readability improvements)
  • Market research and insight gathering
  • Tone and consistency checks

When won’t we use AI?

  • Strategic thinking and messaging
  • Tone of voice direction
  • Generating content – without considered and extensive human input
  • When our clients ask us not to