move process_message inside BaseLLM #1021
Conversation
Codecov Report

```
@@           Coverage Diff            @@
##             main    #1021      +/-  ##
=========================================
+ Coverage   81.85%   82.09%     +0.24%
=========================================
  Files         246      246
  Lines       13725    13732         +7
=========================================
+ Hits        11234    11273        +39
+ Misses       2491     2459        -32
```

☔ View full report in Codecov by Sentry.
A good point. Also @better629, please review the Gemini part of the code.
pre-commit checks failed, suggesting format issues. Please check https://docs.deepwisdom.ai/main/en/guide/contribute/contribute_guide.html#before-submission to format the code.
@geohotstan So, let's do it step by step.
(Force-pushed from f332531 to 1194976.)
Your approach to the modification is correct.

```python
class BaseLLM(ABC):
    ...

    def _user_msg(self, msg: str, images: Optional[Union[str, list[str]]] = None) -> dict[str, Union[str, dict]]:
        if images:
            # as gpt-4v, chat with image
            return self._user_msg_with_imgs(msg, images)
        else:
            return {"role": "user", "content": msg}

    def _user_msg_with_imgs(self, msg: str, images: Optional[Union[str, list[str]]]):
        """
        images: can be a list of http(s) URLs or base64 strings
        """
        if isinstance(images, str):
            images = [images]
        content = [{"type": "text", "text": msg}]
        for image in images:
            # image URL or image base64
            url = image if image.startswith("http") else f"data:image/jpeg;base64,{image}"
            # supports multiple-image inputs
            content.append({"type": "image_url", "image_url": url})
        return {"role": "user", "content": content}

    def _assistant_msg(self, msg: str) -> dict[str, str]:
        return {"role": "assistant", "content": msg}

    def _system_msg(self, msg: str) -> dict[str, str]:
        return {"role": "system", "content": msg}

    def format_msg(self, messages: Union[str, Message, list[dict], list[Message], list[str]]) -> list[dict]:
        """Convert messages to list[dict]."""
        from metagpt.schema import Message

        if not isinstance(messages, list):
            messages = [messages]

        processed_messages = []
        for msg in messages:
            if isinstance(msg, str):
                processed_messages.append({"role": "user", "content": msg})
            elif isinstance(msg, dict):
                assert set(msg.keys()) == {"role", "content"}
                processed_messages.append(msg)
            elif isinstance(msg, Message):
                processed_messages.append(msg.to_dict())
            else:
                # report the offending element's type, not the container's
                raise ValueError(
                    f"Only supported message types are: str, Message, dict, but got {type(msg).__name__}!"
                )
        return processed_messages

    def _system_msgs(self, msgs: list[str]) -> list[dict[str, str]]:
        return [self._system_msg(msg) for msg in msgs]

    def _default_system_msg(self):
        return self._system_msg(self.system_prompt)
```
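To illustrate what `format_msg` normalizes, here is a minimal standalone sketch of the same logic, using a simplified `Message` stand-in instead of `metagpt.schema.Message` (the stand-in class and this free function are illustrative, not the project's actual code):

```python
from typing import Union


class Message:
    """Simplified stand-in for metagpt.schema.Message."""

    def __init__(self, role: str, content: str):
        self.role = role
        self.content = content

    def to_dict(self) -> dict:
        return {"role": self.role, "content": self.content}


def format_msg(messages: Union[str, Message, dict, list]) -> list[dict]:
    """Normalize str / dict / Message (or a list of them) to list[dict]."""
    if not isinstance(messages, list):
        messages = [messages]
    processed = []
    for msg in messages:
        if isinstance(msg, str):
            # bare strings become user messages
            processed.append({"role": "user", "content": msg})
        elif isinstance(msg, dict):
            assert set(msg.keys()) == {"role", "content"}
            processed.append(msg)
        elif isinstance(msg, Message):
            processed.append(msg.to_dict())
        else:
            raise ValueError(f"Unsupported message type: {type(msg).__name__}")
    return processed


print(format_msg(["hi", Message("assistant", "hello")]))
# → [{'role': 'user', 'content': 'hi'}, {'role': 'assistant', 'content': 'hello'}]
```

This shows why moving the helpers into `BaseLLM` is convenient: every provider subclass gets the same str/dict/Message normalization for free.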
Other fix merged.
Features
Moving process_message into BaseLLM is better since process_message is only used in the context of LLM use.

Feature Docs

Influence

Result
KeyError: "Could not recognize the intended type of the dict. A Content should have a 'parts' key. A Part should have a 'inline_data' or a 'text' key. A Blob should have 'mime_type' and 'data' keys. Got keys: ['role', 'content']"

Other
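The KeyError quoted earlier comes from Gemini rejecting OpenAI-style message dicts: the Gemini API expects `{"role": ..., "parts": [...]}` entries (with role `"model"` instead of `"assistant"`) rather than `{"role": ..., "content": ...}`. A hedged sketch of the kind of conversion a Gemini provider would need (`to_gemini_format` is a hypothetical helper, not the project's actual code):

```python
def to_gemini_format(messages: list[dict]) -> list[dict]:
    """Convert OpenAI-style {"role", "content"} dicts to Gemini's
    {"role", "parts"} shape. Gemini only knows the roles "user" and
    "model", so "assistant" maps to "model" and everything else
    (including "system", which Gemini handles separately) to "user".
    """
    converted = []
    for msg in messages:
        role = "model" if msg["role"] == "assistant" else "user"
        converted.append({"role": role, "parts": [msg["content"]]})
    return converted


print(to_gemini_format([{"role": "user", "content": "hi"}]))
# → [{'role': 'user', 'parts': ['hi']}]
```

With a conversion like this applied in the Gemini provider after `format_msg`, the shared `BaseLLM.process_message` path can stay OpenAI-shaped while each provider adapts the final payload.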