ICONgroupCWC/LLMServer


This repository holds the code for an LLM server that supports automated federated learning for user prompts. To run the LLM server, install the dependencies below:

  1. llama_cpp
  2. flask
  3. openai
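The three dependencies above can be installed with pip. Note that the llama_cpp module is distributed on PyPI under the package name llama-cpp-python:

```shell
pip install llama-cpp-python flask openai
```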

To run the server, execute the following command: python inference_server.py

The LLM server here supports requests from the FL server, which is implemented in a separate repository. The standalone code for running NAS/HPO can be found in its own repository.
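For illustration, a client on the FL-server side might query this LLM server as sketched below, using only the Python standard library. The endpoint URL, port, and JSON fields are assumptions matching no particular code in the repository:

```python
# Hypothetical client for the LLM server; endpoint URL and JSON schema
# are assumptions for illustration.
import json
import urllib.request

SERVER_URL = "http://localhost:8000/generate"  # assumed address of the server


def build_request(prompt: str, max_tokens: int = 128) -> urllib.request.Request:
    """Build a POST request carrying the prompt as a JSON body."""
    body = json.dumps({"prompt": prompt, "max_tokens": max_tokens}).encode("utf-8")
    return urllib.request.Request(
        SERVER_URL,
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )


def query(prompt: str) -> str:
    """Send the prompt to the running server and return the generated text."""
    with urllib.request.urlopen(build_request(prompt)) as resp:
        return json.loads(resp.read())["text"]
```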
