Acronyms for Using A.I. Tools: Home

 

Whether we realize it or not, artificial intelligence has become part of our everyday lives. Tools such as ChatGPT, Google Gemini, Claude, Perplexity, and many others are helping us save time, solve problems, and enhance our creativity. As with any other tool, the more skilled we are at using A.I., the better our results will be. The acronyms below will help you craft more useful prompts and choose the A.I. tools best suited to your needs.

Remember! It's essential to verify and think critically about A.I.-generated results. For help with this, see Acronyms for Evaluating Information.

For a wealth of additional information on artificial intelligence, visit Keeping Up With A.I.: Assorted Resources for College Educators and A.I. for Students.

CLEAR

  • Concise
  • Logical
  • Explicit
  • Adaptive
  • Reflective
Professor Leo S. Lo explains his CLEAR technique for prompt engineering in this detailed video: CLEARer Dialogues with AI: Unpacking Prompt Engineering for Librarians. For a handy guide to the CLEAR acronym, see Georgetown University's How To Craft Prompts.

TAP

  • Topic
  • Action
  • Parameters

TASTE

  • Test
  • Adjust
  • Simplify
  • Trust
  • Examine
Professor Adrian J. Wallbank recommends TAP & TASTE, a two-step approach to prompt engineering for ChatGPT. Step one, TAP, guides your input to ensure an effective prompt. Step two, TASTE, helps you evaluate the output and fine-tune your results.

COSTAR

  • Context
  • Objective
  • Style
  • Tone
  • Audience
  • Response
Data scientist Sheila Teo won Singapore's GPT-4 Prompt Engineering competition with her COSTAR acronym. For a simplified explanation, see Unlocking the Power of COSTAR Prompt Engineering.

ROBOT

  • Reliability
  • Objective
  • Bias
  • Ownership
  • Type

POTATO

  • Purpose
  • Ownership
  • Transparency
  • Accuracy
  • Transhumanism
  • Operability

Use the ROBOT test to evaluate information (such as news articles) about artificial intelligence tools. Use the POTATO test to evaluate tools that use generative AI.

RAG

  • Retrieval
  • Augmented
  • Generation
RAG is a framework that enhances the quality of AI-generated outputs by incorporating factual information from external sources rather than relying solely on pre-trained internal knowledge. For this reason, a tool that employs RAG may be a better choice when using AI for research purposes. To learn more, see Considering RAG When Evaluating Generative AI Tools.
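For readers curious about what "retrieve, then generate" looks like in practice, here is a minimal sketch of the RAG pattern in Python. The keyword-overlap retriever, the sample documents, and the function names are illustrative assumptions only; real RAG tools use vector-similarity search over large collections and send the augmented prompt to a language model for the final generation step.

```python
# Minimal sketch of the RAG pattern: (1) retrieve passages relevant
# to the question, (2) augment the prompt with them, so the model
# (3) generates an answer grounded in those sources.

def retrieve(query, documents, top_k=1):
    """Rank documents by how many words they share with the query (toy scoring)."""
    query_words = set(query.lower().split())
    scored = sorted(
        documents,
        key=lambda doc: len(query_words & set(doc.lower().split())),
        reverse=True,
    )
    return scored[:top_k]

def build_augmented_prompt(query, documents):
    """Prepend the retrieved passages so the model answers from sources."""
    context = "\n".join(retrieve(query, documents))
    return (
        "Using only the context below, answer the question.\n\n"
        f"Context:\n{context}\n\n"
        f"Question: {query}"
    )

# Illustrative stand-in for an external knowledge source.
docs = [
    "The library offers 24/7 chat with a librarian.",
    "Interlibrary loan requests take three to five days.",
]
prompt = build_augmented_prompt("How long do interlibrary loan requests take?", docs)
# The resulting prompt would then be sent to a generative model.
```

Because the answer is drawn from the retrieved context rather than the model's memory, the output can cite up-to-date, verifiable sources instead of relying on what the model absorbed during training.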

NEED HELP?

Call or visit your campus librarians or connect to the statewide Ask-a-Librarian service via live chat, email, or text.


This page was created and is maintained by
Jenny Saxton.
Questions and comments are welcome.