Replies: 3 comments 4 replies
-
I've got a similar issue, but with Pidgin. How do I get the client to use the Shadows chat API for content?
-
Also interested in this.
-
This is still under development (sorry, I am swamped with other projects currently) — you'd have to script it yourself for anything outside the basic use cases that exist (chat and social media posting), which are done from animation jobs on the API server.

Basically, the thinking is that this sort of thing happens at the API server level and is then sent down to the clients to execute. This centralizes calls to the LLM, and it also reduces overhead in cases where an agent has to know about other agents and what they know. So you won't see any config for Shadows or an LLM at the client level. Another primary driver is that we didn't want clients making a bunch of requests to an array of services, because in an exercise we may want to hide or whitelist those. With everything from the client going through the API, that is much easier to manage.

I haven't used Pidgin much, but I assume it has the same concerns as doing chat over Mattermost. Web content really is early, but it's for things like a client requesting a page that we generate dynamically on the fly. Web scraping for content to move into a gapped range is hard; this was some early work to see if we could skip that and dynamically load/create content instead.
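The centralization pattern described above can be sketched roughly as follows. This is a hypothetical illustration, not the actual GHOSTS/Shadows API: the class names, methods, and the `fake_llm` stand-in are all invented for the sketch. The point is the shape of the design: only the API server holds an LLM handle and agent state, and clients only ever ask the server for finished content to execute.

```python
# Hypothetical sketch (NOT the real GHOSTS/Shadows API) of the pattern
# described above: clients never talk to an LLM directly; a central API
# server owns the LLM connection and hands generated content to clients.

class ApiServer:
    """Central point that owns the LLM connection and shared agent state."""

    def __init__(self, llm):
        self._llm = llm          # only the server holds an LLM handle
        self._agent_state = {}   # shared knowledge about other agents

    def animation_job(self, agent_id, task):
        # Generation happens server-side, so in an exercise the outbound
        # services can be hidden or whitelisted in one place.
        prompt = f"[{agent_id}] {task}"
        return self._llm(prompt)


class Client:
    """A GHOSTS-style client: executes content, never configures an LLM."""

    def __init__(self, agent_id, server):
        self.agent_id = agent_id
        self.server = server

    def request_content(self, task):
        # All client traffic funnels through the API server.
        return self.server.animation_job(self.agent_id, task)


# Stand-in for the real model call; invented for this sketch.
fake_llm = lambda prompt: f"generated content for: {prompt}"

server = ApiServer(fake_llm)
client = Client("npc-01", server)
print(client.request_content("post a chat message"))
```

One practical consequence of this design choice, as noted above, is that there is deliberately no LLM or Shadows configuration in the client's `application.json`.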
-
I am able to set up GHOSTS Shadows and can successfully log in using the UI on port 7860. I have run a few queries to make sure the LLM is working correctly, but how do we make GHOSTS clients use it? I looked in application.json and there is no place to configure it. Mainly I want to use the web_content and excel_content models.