ChatGPT
Chat model usage and caveats
#TL;DR
- ChatGPT is a conversation-oriented model. Its strengths are multi-turn dialogue, instruction following, and strong general-purpose writing, analysis, and code assistance.
- The key to getting good results is not writing ever-longer prompts but: defining the role and task boundaries clearly, fixing the output format (a schema), and setting an abstain rule for uncertainty (e.g. respond "Unsure about answer").
- For production, route high-risk tasks (factual claims, medical/legal content, external writes) through tools and RAG, and use a fixed evaluation set for regression testing.
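The regression advice above can be sketched as a tiny harness: run a fixed set of QA cases and flag drift. `ask` stands in for whatever chat wrapper you use; the cases and keyword matching are illustrative, not a real evaluation suite.

```python
# Minimal regression sketch: fixed cases, keyword checks.
# `ask(question) -> str` is an assumed wrapper around your chat API.

CASES = [
    # A grounded fact the model should answer.
    {"question": "What was OKT3 originally sourced from?", "expect": "mice"},
    # An unanswerable question where the abstain rule should fire.
    {"question": "What is the capital of Atlantis?", "expect": "unsure"},
]

def regress(ask, cases=CASES):
    """Return the questions whose answers no longer contain the expected keyword."""
    failures = []
    for case in cases:
        if case["expect"] not in ask(case["question"]).lower():
            failures.append(case["question"])
    return failures
```

Wiring `ask` to the chat API and running `regress` in CI turns the abstain rule into a checkable contract.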
#How to Prompt
You can turn your conversation prompts into a reusable template and fill in the variables per task:
```text
System: You are <role>. You must follow the rules.
Rules:
- Follow the requested output format exactly.
- If you are unsure, say "Unsure" and list missing information.
- Do not fabricate facts or citations.

User: <task + context + constraints + examples>
```
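The template can also be filled programmatically; a minimal sketch (the `build_messages` helper and its arguments are illustrative, not part of any API):

```python
def build_messages(role: str, task: str) -> list:
    """Fill the reusable template above into the chat-message format."""
    system = (
        f"You are {role}. You must follow the rules.\n"
        "Rules:\n"
        "- Follow the requested output format exactly.\n"
        '- If you are unsure, say "Unsure" and list missing information.\n'
        "- Do not fabricate facts or citations."
    )
    return [
        {"role": "system", "content": system},
        {"role": "user", "content": task},
    ]
```

Keeping the rules in one place means every task variant inherits the same format and abstain behavior.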
#Self-check rubric
- Does the reply follow the output format (no extra explanation, no digressions)?
- When information is missing, does the model ask clarifying questions or answer "Unsure"?
- Are there hallucinations (fabricated sources, numbers, or conclusions)?
- Does the model stay consistent across turns (no self-contradiction)?
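A few rubric items can be spot-checked mechanically. A rough sketch, where the heuristics are naive stand-ins for real evaluation:

```python
import re

def self_check(answer: str, context: str) -> dict:
    """Naive heuristic checks for a few of the rubric items above."""
    urls = re.findall(r"https?://\S+", answer)
    return {
        # Did the model abstain explicitly instead of guessing?
        "abstained": answer.strip().lower().startswith("unsure"),
        # Links that never appear in the provided context
        # (a crude fabricated-citation signal).
        "unsupported_links": [u for u in urls if u not in context],
    }
```

Format adherence and cross-turn consistency need task-specific checks (or an LLM judge); these two signals are only the cheap, mechanical slice of the rubric.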
In this section, we cover the latest prompt engineering techniques for ChatGPT, including prompts, applications, limitations, papers, and additional reading material.
#ChatGPT Introduction
ChatGPT is a new model trained by OpenAI that can interact conversationally. It is trained to follow the instructions in a prompt and provide appropriate responses in the context of a dialogue. ChatGPT can help answer questions, suggest recipes, write lyrics in a particular style, generate code, and more.
ChatGPT is trained with Reinforcement Learning from Human Feedback (RLHF). While this model is much more capable than previous GPT iterations (and was also trained to reduce harmful and untruthful outputs), it still has limitations. Let's look at some of its capabilities and limitations through concrete examples.
You can use the research preview of ChatGPT, but in the examples below we will use the "Chat" mode on the OpenAI Playground.
#Reviewing the Conversation Task
In a previous guide, we briefly covered conversational capabilities and role prompting: how to instruct an LLM to hold a conversation in a specific style, with a specific intent, behavior, and identity.
Let's revisit our earlier basic example, where we built a conversational system able to produce more technical and scientific responses to questions.
Prompt:
```
The following is a conversation with an AI research assistant. The assistant's tone is professional and scientific.

Human: Hello, who are you?
AI: Hello! I am an AI research assistant. How can I help you today?
Human: Can you tell me about the formation of black holes?
AI:
```
From the example above, you can see two important components:

- the intent, i.e. an explanation of what the chatbot is
- the identity, which tells the chatbot what style or tone to use when responding

The simple example above works well with `text-davinci-003` and with `gpt-3.5-turbo`, the model that powers ChatGPT. Large companies such as Snap Inc. and Instacart have already integrated ChatGPT-powered conversational features into their products, ranging from personalized recommendations to open-ended shopping.
#Conversations with ChatGPT
#Multi-turn Conversations
To start demonstrating ChatGPT's capabilities, we will use the chatbot-assistant example above and discuss the results. Unlike `text-davinci-003`, the `gpt-3.5-turbo` model behind ChatGPT takes a chat format as input: it expects a series of messages and uses them to generate a response.

Input:
```
System: You are an AI research assistant. You use a technical and scientific tone.
User: Hello, who are you?
Assistant: Greetings! I am an AI research assistant. How can I help you today?
User: Can you tell me about the formation of black holes?
```
Output:
```
Assistant: Black holes can form when a massive star exhausts its fuel and collapses under its own gravity. The collapse creates an extremely dense object with a gravitational field so strong that not even light can escape. The boundary beyond which escape is impossible is called the event horizon...
```
Note that in the example above I simplified the input and output, but the ChatGPT chat completion API requires messages to be in a specific format. I have added a snapshot below of how this example looks when using "Chat Mode" in the OpenAI Playground:

A more formal API call for our example would look something like the example below:

```python
import openai

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system", "content": "You are an AI research assistant. You use a tone that is technical and scientific."},
        {"role": "user", "content": "Hello, who are you?"},
        {"role": "assistant", "content": "Greetings! I am an AI research assistant. How can I help you today?"},
        {"role": "user", "content": "Can you tell me about the formation of black holes?"}
    ]
)
```
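Multi-turn state is just the growing `messages` list: each reply is appended before the next turn. A sketch of that bookkeeping, with the API call abstracted behind a `complete` callable (the helper name is illustrative):

```python
def chat_turn(messages, user_input, complete):
    """Append the user turn, fetch a completion, record and return the reply."""
    messages.append({"role": "user", "content": user_input})
    reply = complete(messages)
    messages.append({"role": "assistant", "content": reply})
    return reply
```

With the legacy SDK shown above, `complete` could be `lambda msgs: openai.ChatCompletion.create(model="gpt-3.5-turbo", messages=msgs)["choices"][0]["message"]["content"]`.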
In fact, the way developers interact with ChatGPT in the future is expected to be done via the Chat Markup Language (ChatML for short).
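ChatML delimits each message with special tokens. A rough sketch of rendering a messages list into that form, based on OpenAI's ChatML draft (treat the exact token strings as an assumption):

```python
def to_chatml(messages) -> str:
    """Render chat messages in ChatML's <|im_start|>/<|im_end|> framing."""
    parts = [
        f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>"
        for m in messages
    ]
    # Leave the assistant turn open so the model completes it.
    parts.append("<|im_start|>assistant\n")
    return "\n".join(parts)
```

In practice the chat completion API handles this framing for you; the sketch only shows what the `messages` structure corresponds to under the hood.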
#Single-turn Tasks
The chat format supports multi-turn conversations, but it also supports single-turn tasks similar to what we did with `text-davinci-003`. Let's try a question-answering example.

Input:
```
User: Answer the question based on the context below. Keep the answer short and concise. Respond "Unsure about answer" if not sure about the answer.

Context: Teplizumab traces its roots to a New Jersey drug company called Ortho Pharmaceutical. There, scientists generated an early version of the antibody, dubbed OKT3. Originally sourced from mice, the molecule was able to bind to the surface of T cells and limit their cell-killing potential. In 1986, it was approved to help prevent organ rejection after kidney transplants, making it the first therapeutic antibody allowed for human use.

Question: What was OKT3 originally sourced from?

Answer:
```
Output:
```
Assistant: Mice.
```
Keep in mind that I added the "User" and "Assistant" labels to better illustrate how to perform tasks with ChatGPT. Here is the example in the Playground:

More formally, this is the API call (I have only included the message component of the request):

```python
import openai

CONTENT = """Answer the question based on the context below. Keep the answer short and concise. Respond "Unsure about answer" if not sure about the answer.

Context: Teplizumab traces its roots to a New Jersey drug company called Ortho Pharmaceutical. There, scientists generated an early version of the antibody, dubbed OKT3. Originally sourced from mice, the molecule was able to bind to the surface of T cells and limit their cell-killing potential. In 1986, it was approved to help prevent organ rejection after kidney transplants, making it the first therapeutic antibody allowed for human use.

Question: What was OKT3 originally sourced from?

Answer:
"""

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "user", "content": CONTENT},
    ],
    temperature=0,
)
```
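Because the prompt asks the model to respond "Unsure about answer" when the context is insufficient, the caller should treat that string as an abstain signal rather than an answer. A small sketch (the helper name and label-stripping are illustrative):

```python
def extract_answer(reply_text: str):
    """Return the model's answer, or None when it abstained."""
    text = reply_text.strip()
    if "unsure about answer" in text.lower():
        return None
    # Strip an optional "Assistant:" label like the transcript above.
    if text.lower().startswith("assistant:"):
        text = text[len("assistant:"):].strip()
    return text
```

Returning `None` on abstain forces downstream code to handle the "no answer" path explicitly instead of passing the literal refusal string along as data.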
#Instructing Chat Models
According to the official OpenAI documentation, snapshots of the `gpt-3.5-turbo` model are also made available, such as `gpt-3.5-turbo-0301`. For `gpt-3.5-turbo-0301`, OpenAI recommends including instructions in the `user` message rather than relying on the `system` message.
#References
- ChatGPT and a New Academic Reality: AI-Written Research Papers and the Ethics of the Large Language Models in Scholarly Publishing (Mar 2023)
- Are LLMs the Master of All Trades? : Exploring Domain-Agnostic Reasoning Skills of LLMs (Mar 2023)
- Is ChatGPT A Good Keyphrase Generator? A Preliminary Study (Mar 2023)
- MM-REACT: Prompting ChatGPT for Multimodal Reasoning and Action (Mar 2023)
- Large Language Models Can Be Used to Estimate the Ideologies of Politicians in a Zero-Shot Learning Setting (Mar 2023)
- Chinese Intermediate English Learners outdid ChatGPT in deep cohesion: Evidence from English narrative writing (Mar 2023)
- A Comprehensive Capability Analysis of GPT-3 and GPT-3.5 Series Models (Mar 2023)
- ChatGPT as the Transportation Equity Information Source for Scientific Writing (Mar 2023)
- Translating Radiology Reports into Plain Language using ChatGPT and GPT-4 with Prompt Learning: Promising Results, Limitations, and Potential (Mar 2023)
- ChatGPT Participates in a Computer Science Exam (Mar 2023)
- Consistency Analysis of ChatGPT (Mar 2023)
- Algorithmic Ghost in the Research Shell: Large Language Models and Academic Knowledge Creation in Management Research (Mar 2023)
- Large Language Models in the Workplace: A Case Study on Prompt Engineering for Job Type Classification (Mar 2023)
- Seeing ChatGPT Through Students' Eyes: An Analysis of TikTok Data (Mar 2023)
- Extracting Accurate Materials Data from Research Papers with Conversational Language Models and Prompt Engineering -- Example of ChatGPT (Mar 2023)
- ChatGPT is on the horizon: Could a large language model be all we need for Intelligent Transportation? (Mar 2023)
- Making a Computational Attorney (Mar 2023)
- Does Synthetic Data Generation of LLMs Help Clinical Text Mining? (Mar 2023)
- MenuCraft: Interactive Menu System Design with Large Language Models (Mar 2023)
- A Comprehensive Survey of AI-Generated Content (AIGC): A History of Generative AI from GAN to ChatGPT (Mar 2023)
- Exploring the Feasibility of ChatGPT for Event Extraction
- ChatGPT: Beginning of an End of Manual Annotation? Use Case of Automatic Genre Identification (Mar 2023)
- Is ChatGPT a Good NLG Evaluator? A Preliminary Study (Mar 2023)
- Will Affective Computing Emerge from Foundation Models and General AI? A First Evaluation on ChatGPT (Mar 2023)
- UZH_CLyp at SemEval-2023 Task 9: Head-First Fine-Tuning and ChatGPT Data Generation for Cross-Lingual Learning in Tweet Intimacy Prediction (Mar 2023)
- How to format inputs to ChatGPT models (Mar 2023)
- Can ChatGPT Assess Human Personalities? A General Evaluation Framework (Mar 2023)
- Cross-Lingual Summarization via ChatGPT (Feb 2023)
- ChatAug: Leveraging ChatGPT for Text Data Augmentation (Feb 2023)
- Dr ChatGPT, tell me what I want to hear: How prompt knowledge impacts health answer correctness (Feb 2023)
- An Independent Evaluation of ChatGPT on Mathematical Word Problems (MWP) (Feb 2023)
- ChatGPT: A Meta-Analysis after 2.5 Months (Feb 2023)
- Let's have a chat! A Conversation with ChatGPT: Technology, Applications, and Limitations (Feb 2023)
- Check Your Facts and Try Again: Improving Large Language Models with External Knowledge and Automated Feedback (Feb 2023)
- On the Robustness of ChatGPT: An Adversarial and Out-of-distribution Perspective (Feb 2023)
- How Generative AI models such as ChatGPT can be (Mis)Used in SPC Practice, Education, and Research? An Exploratory Study (Feb 2023)
- Can ChatGPT Understand Too? A Comparative Study on ChatGPT and Fine-tuned BERT (Feb 2023)
- A Prompt Pattern Catalog to Enhance Prompt Engineering with ChatGPT (Feb 2023)
- Zero-Shot Information Extraction via Chatting with ChatGPT (Feb 2023)
- ChatGPT: Jack of all trades, master of none (Feb 2023)
- A Pilot Evaluation of ChatGPT and DALL-E 2 on Decision Making and Spatial Reasoning (Feb 2023)
- Netizens, Academicians, and Information Professionals' Opinions About AI With Special Reference To ChatGPT (Feb 2023)
- Linguistic ambiguity analysis in ChatGPT (Feb 2023)
- ChatGPT versus Traditional Question Answering for Knowledge Graphs: Current Status and Future Directions Towards Knowledge Graph Chatbots (Feb 2023)
- What ChatGPT and generative AI mean for science (Feb 2023)
- Applying BERT and ChatGPT for Sentiment Analysis of Lyme Disease in Scientific Literature (Feb 2023)
- Exploring AI Ethics of ChatGPT: A Diagnostic Analysis (Jan 2023)
- ChatGPT for Good? On Opportunities and Challenges of Large Language Models for Education (Jan 2023)
- The political ideology of conversational AI: Converging evidence on ChatGPT's pro-environmental, left-libertarian orientation (Jan 2023)
- Techniques to improve reliability - OpenAI Cookbook
- Awesome ChatGPT Prompts
- Introducing ChatGPT (Nov 2022)