
Generative AI is my research and writing partner. Should I reveal it?


If I use an AI tool for research or to help me create something, should I cite it in my full work as a source? How do you properly attribute AI tools when you use them?

—Dating Finder

Dear Dating Finder,

The simple answer is that if you use generative AI for research purposes, disclosure is probably not necessary. However, attribution is probably necessary if you use ChatGPT or another AI tool for composition.

Whenever you feel ethically conflicted about disclosing your use of AI software, here are two guiding questions I think you should ask yourself: Did I use AI for research or for composition? And might the recipient of this AI-assisted work feel cheated to learn it was generated by software rather than by a person? Sure, these questions may not fit perfectly in every situation, and academics are definitely held to a higher standard when it comes to proper citation; however, I firmly believe that taking five minutes to reflect can help you understand appropriate usage and avoid unnecessary headaches.

Distinguishing between research and composition is a crucial first step. If I’m using generative AI as a kind of unreliable encyclopedia that can point me to other sources or broaden my perspective on a topic, but not as part of the actual writing, I think that’s less problematic and unlikely to leave the stench of deception. Always double-check any facts you find in chatbot results, and never cite a ChatGPT answer or a Perplexity page as your primary source of truth. Most chatbots can now link to external sources on the web, so you can click through to read more. Think of the chatbot, in this context, as part of the information infrastructure: ChatGPT can be the road you’re driving on, but the final destination must be some external link.

Let’s say you decide to use a chatbot to outline a first draft, or ask it to create writing, images, audio, or video to accompany your work. In this case, I think erring on the side of disclosure is smart. Even the Domino’s cheese sticks on the Uber Eats app now include a disclaimer that the food description was generated by AI and may list inaccurate ingredients.

Whenever you use AI for creation, and in some cases for research, you should focus on the second question. Basically, ask yourself whether the reader or viewer would feel cheated to learn later that parts of what they experienced were generated by AI. If so, you should add appropriate attribution explaining how you used the tool, out of respect for your audience. Generating parts of this column without disclosure would not only violate WIRED policy, it would also make for a dry and unfun experience for both of us.

By first considering the people who will experience your work and your intentions in creating it, you can add context to your use of AI. That context can help you navigate tricky situations. In most cases, a work email generated by AI and reviewed by you is probably fine. Using generative AI to compose a condolence email after a death, however, would be insensitive, and it’s something that has actually happened. If the human on the other end of the communication is looking to connect with you on a personal and emotional level, consider closing the ChatGPT browser tab and pulling out a notepad and pen.


“How can educators teach teenagers to use AI tools responsibly and ethically? Do the advantages of AI outweigh the threats?”

—Raised hand
