Have you ever felt like you only get weird answers back when talking to AI? 🤔
Actually, this might be because your "prompt" (your question or instructions) isn't getting across well. Reading the "Caveat promptor" article on the blog 'Surfing Complexity' made things click for me 💭
Who's Responsible for Prompts?
You often hear the phrase "Caveat emptor (buyer beware)," but there's a play on it: "Caveat promptor (prompter beware)." In other words, if the person asking doesn't communicate clearly, they won't get good answers back ✨
I often ask AI things too vaguely myself and end up going, "Huh? That's not what I meant!" 😳
How to Make Good Prompts?
What struck me about this article was the idea that prompt design improves with just a little awareness of a few small tips. Specifically...
- Ask specifically
- Properly convey necessary information
- Avoid vague expressions
Apparently just these things can make AI responses much better 👍
For example, rather than asking "Can you recommend a movie?", asking "Can you recommend a recent, emotionally moving movie for someone in their 30s?" gets you much more useful answers 🌸
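If you happen to be calling a model from code, here's a tiny sketch of what that same "vague vs. specific" difference looks like. The `ask_ai` function below is just a hypothetical placeholder, not a real API; the only point is how the two prompts are written.

```python
# Minimal sketch of the "vague vs. specific" prompt idea.
# `ask_ai` is a hypothetical stand-in for whatever chat API you actually use.

def ask_ai(prompt: str) -> str:
    """Hypothetical placeholder for a real chat-model call."""
    print(f"--- prompt sent ---\n{prompt}\n")
    return "(model response would appear here)"

# Vague: the model has to guess who you are and what you want.
vague = "Recommend movies?"

# Specific: audience, mood, recency, and output format are all spelled out.
specific = (
    "Recommend 3 emotionally moving movies released in the last 2 years "
    "that would suit someone in their 30s. "
    "For each, give the title and one sentence on why it fits."
)

ask_ai(vague)
ask_ai(specific)
```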
AI Isn't the Same as Humans
It sounds obvious, but AI can't read the deeper meaning behind your words the way a person can. So when a question is vague, the conversation often goes off the rails 💡
For people like me who tend to phrase things loosely, this might be an especially important point to watch out for 🥺
When things don't go well, it might be worth stopping to think, "maybe my question was the problem?" ✨
That's how I came to realize that talking with AI is also a form of two-way communication 💭