
A Discussion of AI Ethics

Today, I spent some time working toward the renewal of my Accreditation in Public Relations. A key component of the accreditation is ethics, and to fulfill my ethics requirement, I needed to complete some professional development in this area.

I came across two excellent white papers from the Public Relations Society of America (PRSA): AI PROMPTING 101: A “Start Here” Guide for Professional Communicators and The Ethical Use of AI for Public Relations Practitioners.

After reading both papers, I took the opportunity to reflect on my own practices as someone who regularly integrates AI tools into research, strategy development, and teaching. Like it or not, AI is everywhere – from our Microsoft products, to Google Docs, to “AI Mode” search engine results, to Canva filters. So how can we use it ethically and in line with our professional standards?

Human Judgment Comes First

One of the strongest themes throughout The Ethical Use of AI for Public Relations Practitioners is that AI is a tool—powerful, efficient, and transformative—but it is not a peer. PRSA is unequivocal: humans remain fully accountable for accuracy, truthfulness, fairness, and integrity. AI can help us ideate, draft, analyze, or visualize, but it cannot replace professional judgment.

Transparency Is No Longer Optional

PRSA’s updated guidance underscores the growing expectation for communicators to disclose when AI meaningfully shapes public-facing content. That doesn’t mean we need to disclose every draft or brainstorming session—but when AI contributes substantially to an output, transparency builds trust. Here is an example of a disclosure statement from The Ethical Use of AI for Public Relations Practitioners: “Portions of this document were developed using generative AI tools to support research, ideation, and editing. All content was reviewed and finalized by the editors to ensure ethical alignment and professional accuracy.”

AI Prompting Is a Professional Skill

The second white paper, AI Prompting 101, reinforces a simple truth: If you are going to use AI, its value is directly tied to the quality of the instructions you give it. The guide provides a helpful “SPOCK” framework—specificity, persona, output, context, and knowledge—that mirrors many of the prompting skills I’ve developed over the past two years while developing my own AI literacy.
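The SPOCK framework lends itself to a simple template. Here is a minimal sketch (my own illustration, not PRSA's wording) of how the five elements might be assembled into a single prompt:

```python
# A hypothetical helper that combines the five SPOCK elements
# (specificity, persona, output, context, knowledge) into one prompt.
# The field order and phrasing are my own, not PRSA's template.

def build_spock_prompt(specificity, persona, output, context, knowledge):
    """Return a prompt string built from the five SPOCK elements."""
    return "\n".join([
        f"Act as {persona}.",                           # persona
        f"Task: {specificity}",                         # specificity
        f"Context: {context}",                          # context
        f"Background knowledge: {knowledge}",           # knowledge
        f"Format the output as {output}.",              # output
    ])

prompt = build_spock_prompt(
    specificity="draft three headline options for a library program launch",
    persona="a public relations writer for a public library",
    output="a bulleted list",
    context="the program is a free summer reading club for teens",
    knowledge="registration opens June 1 and is limited to 100 participants",
)
print(prompt)
```

Even without a script, the same habit applies: naming each element explicitly before you hit enter tends to produce far more usable output than a one-line request.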

AI Is Not Representative of All Viewpoints

The Ethical Use of AI for Public Relations Practitioners rightly points out that AI is not representative or inclusive by design. The authors add, “this creates ethical concerns because uncritical use of AI outputs can unintentionally reinforce bias, ignore important voices, or misinterpret audiences.” Further, “Practitioners need to remain aware of these risks [and] make deliberate efforts to mitigate them…”

AI Has Other Ethical Considerations

No responsible discussion of AI can overlook its environmental impacts. As I said before, AI is built into many of the tools we use every day, so having a “zero AI footprint” would be quite challenging – especially if you work in marketing and communications. I am not a climate expert, and neither is PRSA, but I’ll share our perspectives.

For myself, I think we need to set our personal threshold for environmental harm, just like we do when we order something online, fertilize our lawn, or eat takeout from Styrofoam containers. Ideally, what we need is change at a systems level – better laws, regulations, and structures that reduce environmental harms and prioritize the health of our ecosystems. Until then, it is very much an exercise in personal awareness and choices. I try to steer away from using generative AI to produce images, for example, because image generation uses significantly more water (think 5-15 liters for an image vs. 1/2 liter for text chats).

PRSA also had guidance on this issue in The Ethical Use of AI for Public Relations Practitioners. “With this increased demand for computing power, electricity to operate the systems, and water to cool the hardware, there are environmental and ethical implications. Practitioners must understand the environmental impacts of mining and fabrication, as well as the processes of training new models, running queries, and building new data centers…”

What This Means for Our Work

Reading these white papers and revisiting my own thinking on the topic reinforced my belief that AI, when used responsibly, can boost creativity, support accessibility, streamline workflows, and help communicators focus more on substance than on administrative tasks. But it also reminds me that our role as communicators—especially in libraries, nonprofits, and public agencies—is to safeguard trust.

That means:

  • Being transparent about how and when AI is used.
  • Keeping humans in the loop for all final decision-making.
  • Maintaining accuracy, fairness, and inclusivity.
  • Protecting confidential or proprietary information.
  • Approaching AI as a tool for amplification, not automation.

As we continue to try to understand AI and what it means for us as professionals and as humans, I’ll close with the often-quoted theme of the 2025 TED Conference: “AI is gaining power at an astonishing pace, prompting a question that’s both alarming and illuminating: what are humans for?” I don’t have the answer, but I appreciate that leaders like PRSA are helping us understand how this ubiquitous technology can ethically fit into our profession.
