Use Policy for Artificial Intelligence (AI)

Artificial Intelligence (AI) and its use for content generation is a hot topic. Several professional writers' organizations (AMWA, ISMPP) have published guidelines for using AI during document development.

In the spirit of transparency, here’s my AI use policy.

First, clients can specify whether AI may or may not be used to research content for their documents.


Second, if I use AI, I am the Human-in-the-Loop (HITL). I choose the AI software, formulate the prompts, assess the output, and revise the prompts. In many cases, I provide a set of published articles that directly pertain to the theme of the current project.

Because AI software may use input sources to train or update its model, I send only published articles, presentations, conference proceedings, book chapters, and other public documents as input sources. I will not send any unpublished information to AI software.

Credentials for using AI

I have invested time and money in learning the strengths and limitations of several AI platforms, and the appropriate ways to use them for my genres (mainly white papers, grant writing, Continuing Medical Education, and publications).

At present, I use AI only to provide non-obvious ideation for white papers, and possibly reviews.

Ensuring accuracy and originality of the content

I read the full articles and write the content myself. I do not subcontract. I also do not recommend using AI to summarize articles at this time.

Furthermore, according to Neil Patel, human-generated blogs receive more traffic on average than AI-generated blogs.

Surprisingly, even though humans spend much more time writing and editing, human-generated articles receive more traffic per minute spent writing/editing than AI-generated articles. [https://neilpatel.com/blog/ai-vs-human-content/]

Thus, I write the content and lead the reader through a logical flow of ideas (and results) to enhance engagement and comprehension.

Note that some AI platforms do not provide their sources and may include inaccuracies (known as hallucinations), as acknowledged in their own answers:

The popular AI platform ChatGPT (GPT-4 and GPT-4o) does not provide direct sources for its answers, as described in its own words: “I can't directly access or provide primary sources, but I can guide you on where to find them. When I provide summaries or information, it’s based on the knowledge I've been trained on, which includes a wide range of sources, but I don’t have direct access to specific databases or documents.” Its searches have “higher hallucination rates [than Perplexity.ai] for less common topics.”


Its answers to queries depend on its training sources and are often limited.


Perplexity.ai
“Perplexity.ai exhibits a lower rate of hallucinations compared to models that do not incorporate real-time retrieval. This advantage stems from its ability to access up-to-date information, which helps it handle niche topics effectively. However, the accuracy of its responses can be compromised if the retrieved sources are outdated or irrelevant, leading to potential new types of inaccuracies.”

In Summary

AI can be useful for analyzing large datasets (including customer reviews) and identifying non-obvious ideas. Using AI requires a knowledgeable Human-in-the-Loop (HITL).

While I may use AI to reveal non-obvious ideas, I read the full articles and write the content myself.

As a reminder, clients can specify whether AI may or may not be used to research content for their documents.