This repository contains the code for an LLM inference server that supports automated federated learning over user prompts. To run the LLM server, install the dependencies listed below.
To start the server, run: python inference_server.py
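Once the server is running, a client would typically submit prompts to it over HTTP. The sketch below only builds and parses a JSON request body for such a call; the field names (`prompt`, `max_tokens`) are illustrative assumptions, since the server's actual request schema is not documented here.

```python
import json

def build_request(prompt: str, max_tokens: int = 128) -> str:
    """Serialize a prompt into a JSON request body.

    The field names here are assumptions for illustration; check
    inference_server.py for the schema the server actually expects.
    """
    return json.dumps({"prompt": prompt, "max_tokens": max_tokens})

# Example: build a request body for a single prompt.
body = build_request("Summarize the latest round of client updates.")
print(body)
```

A real client would then POST this body to the running server with an HTTP library of your choice.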
The LLM server handles requests from the FL server implemented here. The standalone code for running NAS/HPO can be found in this repo.