Interdisciplinary Ideas
Generate interdisciplinary connections.
# TL;DR

- This is a creativity test: it combines seemingly unrelated concepts to check whether the model can produce text that is coherent, consistent in style, and genuinely novel.
- Good fits: advertising concepts, brand copy, story/character design, and interdisciplinary "cross-domain analogies".
- Practical advice: be explicit about the style, length, structure (paragraphs/bullets), and prohibitions (avoid offensive content; avoid fabricating sensitive details about real people).
# Background
The following prompt tests an LLM's ability to perform interdisciplinary tasks and showcases its capacity to generate creative, novel text.
# How to Apply

You can treat this prompt as a "style transfer + concept mashup" template (a minimal sketch follows the list):

- First fix the narrator/voice (e.g., Mahatma Gandhi's letter-writing style)
- Then fix the subject and the conflict (Electron running as a US presidential candidate)
- Finally add constraints: which points must be included, how long the output should be, and whether quotes/metaphors are required
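As a concrete illustration, here is a minimal Python sketch of that three-step assembly. The `build_prompt` helper and its parameter names are hypothetical, not part of the original prompt:

```python
# Hypothetical helper: assemble a "style transfer + concept mashup" prompt
# from the three ingredients above (voice, subject/conflict, constraints).
def build_prompt(voice: str, subject: str, constraints: list[str]) -> str:
    constraint_lines = "\n".join(f"- {c}" for c in constraints)
    return (
        f"Write in the voice of {voice}.\n"
        f"Task: {subject}\n"
        f"Constraints:\n{constraint_lines}"
    )


prompt = build_prompt(
    voice="Mahatma Gandhi, writing a personal letter",
    subject=(
        "a supporting letter to Kasturba Gandhi for Electron, a subatomic "
        "particle running as a US presidential candidate"
    ),
    constraints=[
        "keep it under 400 words",
        "include at least one physics metaphor",
        "close respectfully",
    ],
)
print(prompt)
```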
# How to Iterate

- Add structural requirements: `Opening` / `Arguments` / `Counterarguments` / `Closing`
- Add style constraints: vocabulary, rhythm, rhetorical devices (metaphor / parallelism)
- Add safety constraints: avoid improper attributions to real people; switch to a fictional persona when necessary
- Add a `self-check`: after the output, list the 3 creative points you are most satisfied with plus the 1 you consider weakest (one way to layer all of this onto the base prompt is sketched below)
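A hedged sketch of how these iterations could be appended to the base prompt; the exact section names and wording are assumptions, not a required format:

```python
# Illustrative iteration: append structure, style, safety, and self-check
# requirements to the base prompt. The wording below is one possible choice.
base_prompt = (
    "Write a supporting letter to Kasturba Gandhi for Electron, a subatomic "
    "particle as a US presidential candidate by Mahatma Gandhi."
)

iterated_prompt = base_prompt + "\n\n" + "\n".join([
    "Structure the letter as: Opening, Arguments, Counterarguments, Closing.",
    "Style: warm, measured vocabulary; use at least one metaphor and one instance of parallelism.",
    "Safety: do not attribute real political views to real people; treat the narrator as a fictional persona.",
    "Self-check: after the letter, list the 3 creative points you are most satisfied with and the 1 weakest point.",
])
print(iterated_prompt)
```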
# Self-check Rubric

- Is the text coherent and logically self-consistent?
- Is the style consistent (does the voice stay in character)?
- Are the interdisciplinary elements genuinely fused, rather than awkwardly stitched together?
- Is the structure reusable (could you swap in a new topic and keep using it)?
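One way to apply this rubric is a second review call that asks the model to grade its own draft. A hedged sketch, assuming the OpenAI client shown in the Code / API section and a `draft` variable holding the generated letter:

```python
# Sketch: ask the model to grade a draft letter against the rubric above.
from openai import OpenAI

client = OpenAI()
draft = "<paste the generated letter here>"

rubric = (
    "Rate the draft from 1-5 on each criterion and explain briefly:\n"
    "1. Coherent and logically self-consistent?\n"
    "2. Consistent style (the voice stays in character)?\n"
    "3. Interdisciplinary elements genuinely fused, not stitched together?\n"
    "4. Reusable structure (could the topic be swapped out)?"
)

review = client.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user", "content": f"{rubric}\n\nDraft:\n{draft}"}],
)
print(review.choices[0].message.content)
```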
# Practice

Exercise: swap the elements of the prompt for an industry you know well (see the sketch after this list):

- Write a supporting letter in the voice of a historical figure
- Treat a scientific concept or technical term as the candidate/product
- Require the output to include 3 arguments plus a response to 1 counterargument
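Reusing the hypothetical `build_prompt` helper from the How to Apply sketch, the exercise amounts to swapping the three ingredients; the persona and topic below are purely illustrative:

```python
# Exercise sketch: reuse the hypothetical build_prompt helper defined in the
# "How to Apply" sketch with elements from another domain.
prompt = build_prompt(
    voice="Ada Lovelace, writing an open letter",
    subject=(
        "a supporting letter endorsing Recursion, a programming concept, "
        "as a candidate for city mayor"
    ),
    constraints=[
        "include 3 arguments in its favor",
        "address and respond to 1 counterargument",
        "keep the tone formal but warm",
    ],
)
print(prompt)
```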
# Prompt

```markdown
Write a supporting letter to Kasturba Gandhi for Electron, a subatomic particle as a US presidential candidate by Mahatma Gandhi.
```
# Code / API
## OpenAI (Python)
```python
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4",
    messages=[
        {
            "role": "user",
            "content": "Write a supporting letter to Kasturba Gandhi for Electron, a subatomic particle as a US presidential candidate by Mahatma Gandhi.",
        }
    ],
    temperature=1,
    max_tokens=1000,
    top_p=1,
    frequency_penalty=0,
    presence_penalty=0,
)
```
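The generated letter can then be read from the first choice on the response object:

```python
# Print the letter returned by the API call above.
print(response.choices[0].message.content)
```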
## Fireworks (Python)
```python
import fireworks.client

fireworks.client.api_key = "<FIREWORKS_API_KEY>"

completion = fireworks.client.ChatCompletion.create(
    model="accounts/fireworks/models/mixtral-8x7b-instruct",
    messages=[
        {
            "role": "user",
            "content": "Write a supporting letter to Kasturba Gandhi for Electron, a subatomic particle as a US presidential candidate by Mahatma Gandhi.",
        }
    ],
    stop=["<|im_start|>", "<|im_end|>", "<|endoftext|>"],
    stream=True,
    n=1,
    top_p=1,
    top_k=40,
    presence_penalty=0,
    frequency_penalty=0,
    prompt_truncate_len=1024,
    context_length_exceeded_behavior="truncate",
    temperature=0.9,
    max_tokens=4000,
)
```