Hi @xiaren-qincai
Thank you for the great work — this is very helpful. I have a couple of questions:
- System prompt insertion: Which model interface should be used to inject the system prompt? I’m currently doing something like the following. Is this the recommended approach for optimal model performance?
```python
from swift.llm import InferRequest

def build_request(content, images=None, videos=None):
    # SYSTEM_PROMPT is defined elsewhere in my code
    return InferRequest(
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": content},
        ],
        images=images or None,
        videos=videos or None,
    )
```
- I’ve noticed a significant drop in model performance when I ask for responses in a JSON format. In particular, some classifications flip from Real to AI-generated for the same examples. Could you share a recommended or optimal prompt pattern for returning results in a structured format? Also, is there a known reason why enforcing JSON output might affect the model’s decision quality?
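For reference, here is a minimal sketch of the kind of JSON-constrained setup I'm using — the prompt wording, field names, and the `parse_response` helper are just illustrations of my approach, not code from the repo:

```python
import json

# Illustrative JSON-output variant of the system prompt (exact wording is mine,
# not from the project -- this is the style of constraint that causes the drop)
JSON_SYSTEM_PROMPT = (
    "Classify whether the image is real or AI-generated. "
    'Respond ONLY with a JSON object like {"label": "Real"} or '
    '{"label": "AI-generated"}.'
)

def parse_response(text: str) -> dict:
    """Parse the model's JSON reply, falling back to a raw-text record."""
    try:
        return json.loads(text)
    except json.JSONDecodeError:
        # Keep the raw text so misformatted replies are not silently lost
        return {"label": None, "raw": text}
```

With free-form output the classifications are stable, but with a prompt like the above some of them flip, which is why I'm asking about the recommended structured-output pattern.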