As EU AI discussions stall and then start again almost as quickly as Sam Altman left and then rejoined OpenAI, the PRSA in the U.S. is weighing in on how to keep the use of AI in Public Relations compliant with good PR business practice.

As we mark the first birthday of ChatGPT, and as AI-generated copy gains popularity, it’s no surprise that the IBA pitching teams are seeing more and more industry-savvy journalists refusing to publish AI-generated content, with some even employing online AI content checkers.

Now here at IBA, we’re strong advocates for keeping our content creative and that means human-led – so we have no worries about this new development. But for those dipping their toes into AI waters (I’m looking at you, Sports Illustrated), it’s time to make a U-turn back towards traditionally written content before you risk burning bridges with your favourite journalists!

It can sometimes be difficult to keep up to date with all things AI in the world of PR and Marketing given how quickly this technology is evolving – but not for our editorial teams. They recently spotted the newly published “The Ethical Use of AI For Public Relations Practitioners” guide, courtesy of the PRSA.

We were quick to go through it with a fine-tooth comb to discover the key takeaways on the business ethics of using AI.

1. Free flow of information: Yes, AI can streamline content creation and data interpretation efforts – but when used inappropriately or negligently, it can spread incorrect information. Before using any material generated by AI tools, validate the sources of information, check for accuracy, and avoid using any AI-generated content that may be plagiarized, so you don’t fall into the fake news trap.

2. Disclosure of information: As AI gains popularity, websites, social media, and other digital platforms can be flooded with incorrect AI-generated content. PR and Marketing pros need to remain vigilant, keep honesty as a core value in our content, and call out misinformation as and when we see it! Fess up if you used AI – and remember, you or your company can’t copyright the copy.

3. Safeguarding client confidentiality – an “oops” comes too late: The language models that feed AI chatbots draw on whatever data they are given, no matter the level of confidentiality – so stay vigilant when adding new information to your prompts! PR pros must exercise caution when using AI tools to handle sensitive data, such as client trade secrets or embargoed announcements. Consider whether the benefit of using AI to digest the information outweighs the risk.

4. Uphold credibility – reputations are on the line: As we’ve previously shared, AI does pose a risk to the PR profession’s commitment to accuracy, transparency, and reputation. PR and Marketing pros need to educate themselves to think critically about the business ethics of working with AI, and to stay transparent about how AI is used. It’s better to disclose that your content is AI-generated than to let journalists find out for themselves!

Remember, AI will always lack the curiosity, creativity & empathy of humans! If you need a reminder why IBA International will always be firmly on the human side of the great AI debate, here it is:

Not another blog on ChatGPT, we hear you exclaim!

But we aren’t going to lecture you on how marketers can best use ChatGPT and incorporate it into their campaigns. This type of content is ten-a-penny at the moment – and probably written using ChatGPT!

As one commentator mentioned on my LinkedIn feed recently – “it never takes long for marketers to break the shiny new toys.”

There is a far bigger evolving market backdrop around ChatGPT and the growing number of competing AI solutions. The future of generative pre-trained transformer (GPT) and large language model (LLM) technology is becoming a new technology battlefront.

The AI arms race heats up between tech giants

After ChatGPT was launched by San Francisco-based startup OpenAI in November 2022, it soon came to light that the ‘start-up’ did indeed start up as a not-for-profit organization with a star-studded list of benefactors, from Tesla and SpaceX CEO Elon Musk to venture capitalist Peter Thiel, and up-and-coming entrepreneur Sam Altman, who became CEO of OpenAI in 2019. That was the year Microsoft made its first $1 billion investment in the company, just as Bing was bottoming out as a search engine and serving as the constant butt of Google jokes. As Microsoft invests a further $10 billion in OpenAI, Google is hitting back with its Google Bard AI research assistant, but this got off to a pretty expensive false start when a misleading answer to a question about a NASA telescope wiped more than $120 billion off the market value of Google parent company Alphabet.

Far be it from IBA to weigh in on a brewing battle royal that’s way above most people’s pay grade. But there may well be a battle coming closer to home, as the ownership and use of ChatGPT’s content output comes under the legal lens.

The troops are assembling and IP policy is caught flat-footed

IP policy has been caught off guard by how fast this huge step-change in technology capability has arrived. And we aren’t just talking about educational institutions struggling to manage plagiarism in student-submitted work. The U.S. Copyright Office has gone on record to state that its key focus over the next year will be addressing the legal gray areas surrounding copyright protections and artificial intelligence.

And this begins at the input stage, not the output! ChatGPT and similar software use existing text, images and code to create “new” work – and the technology has to get its ideas from somewhere. That means trawling the web to “train” and “learn” from existing content. No surprise, then, that lawsuits are already being filed against OpenAI and similar companies, arguing that AI engines are illegally using other people’s work to build their platforms and products in the first place.

Let’s assume that this is clarified from a legal perspective. Then we come to the output stage. The question is: does the copyright lie with the creator of the AI technology, or with the company or person who used the tool to create the “new” content? And this is all before any resulting content is even pitched to the media.

Ethics of AI-based content – so who does own the copyright?

We already have IP and even ethical issues around content-based PR campaigns. Generally, the copyright of a piece of content belongs to the company that created the material – for example, the copyright for a bylined article from a key subject matter expert is held by the company they represent. But the IBA team is no stranger to the existing quirks of copyright law once a piece has been submitted to a publication – ownership can change depending on the region of a media outlet or even its own editorial policies, and will be clearly outlined in any author agreements signed.

But what about AI-driven content? Thankfully the IBA team hasn’t been overrun by the infinite digital monkeys bashing out the works of Shakespeare, and we’re always clear with clients on copyright issues, the ethics of pay-for-play, and magazine editorial copyright policies. But it is something that is definitely up for debate in the PR world – which is why it was interesting to see the PR Council (PRC) questioned on exactly this topic recently by PR News.

PR Council weighs in

The PRC position is very much aligned with the existing understanding of PR copyright, as recent comments from Kim Sample, PRC president, and incoming board chair Ellen Ryan Mardiks show:

Sample: We’ve always focused on ethics and standards. We just did a session on ChatGPT, which was highly attended…and we’re following up with a session with legal counsel.

And we’re going to be talking with the board about when do we issue guidance and standards on using generative AI.

We’re saying to members, “Play with it. [AI is] a great thing if it gets rid of some of the mundane tasks.” But [eventually] we will be at a place where we’re going to require disclosure.

Ryan Mardiks: You know about paid spokespeople needing to disclose. All that is transparency. It’s not the exact same thing [as AI disclosure], but it’s adjacent…this is content meant to communicate not just facts and figures, but ideas and messaging and values…so, we’re going to have to be thoughtful about it…we will need to have industry standards.

But we need to lead versus the other way around. So…deploy it, smartly and thoughtfully and hopefully…we don’t lose the intellectual art that needs to be deployed in writing. If we lose that, all it becomes is prompts. We will have lost something important for consumers.

Questions need to be answered

Generative AI is throwing up some very intriguing IP questions in legal circles and in the PR and marketing sector, both in terms of how it learns and how its output is used. For now, it seems best to err on the side of caution when using it for outward-facing content.

In the meantime, perhaps PR and marketing professionals should heed the advice of our last ChatGPT blog on what it can’t do – and use it as an opportunity to sharpen the skills that AI and bots lack: imagination & creativity, strategic & critical thinking, and emotional intelligence.

And get out of the rut of writing marketing-speak – start to write with style and flair, not like a Google Translate of a good idea!

Jamie Kightley is Head of Client Services at IBA International
