
My server is in Singapore; it now always returns "Request timed out, please try again later" #15

Open
Joelsenior opened this issue Aug 27, 2023 · 5 comments

Comments

@Joelsenior

No description provided.

@Joelsenior
Author

This is the result: Request timed out, please try again later!
[The official API has been responding slowly recently; if timeouts keep happening, please come back at another time 😅~]
162.62.80.57 - - [27/Aug/2023 19:42:42] "POST /wechat/?signature=8ba7b6399a7913544f0dd3ffee73afd92abc66f3&timestamp=1693136561&nonce=80269014&openid=oP1R8w53Va30bQ61Su_TxTrDB62k HTTP/1.1" 200 -

Message sent: [{'role': 'system', 'content': '我是ChatGPT, 一个由OpenAI训练的大型语言模型, 我旨在回答并解决人们的任何问题,并且可以使用多种语言与人交流。'}, {'role': 'user', 'content': '看看'}]
{'model': 'gpt-3.5-turbo-0301', 'messages': [{'role': 'system', 'content': '我是ChatGPT, 一个由OpenAI训练的大型语言模型, 我旨在回答并解决人们的任何问题,并且可以使用多种语言与人交流。'}, {'role': 'user', 'content': '看看'}], 'max_tokens': 80, 'temperature': 0.8, 'stream': True}
beginStream <class 'generator'> 1.5668938159942627

Recorded time: 1693136655  Current time: 1693136655
This is the result: Request timed out, please try again later!
[The official API has been responding slowly recently; if timeouts keep happening, please come back at another time 😅~]
162.62.81.123 - - [27/Aug/2023 19:44:16] "POST /wechat/?signature=7462510d1422ef40bed68b4547773e2d37c7f450&timestamp=1693136655&nonce=210201467&openid=oP1R8w53Va30bQ61Su_TxTrDB62k HTTP/1.1" 200 -
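The "request timed out" reply in the logs above is the bot giving up on a slow OpenAI call so it can still answer within WeChat's roughly 5-second callback deadline. A minimal sketch of that pattern (function and parameter names are my own illustration, not taken from this repo): run the upstream call in a worker thread and abandon it after a deadline.

```python
import concurrent.futures

def call_with_deadline(fn, timeout_s=4.5):
    """Run fn() in a worker thread; return its result, or None if it
    takes longer than timeout_s seconds."""
    pool = concurrent.futures.ThreadPoolExecutor(max_workers=1)
    future = pool.submit(fn)
    try:
        return future.result(timeout=timeout_s)
    except concurrent.futures.TimeoutError:
        # Caller can reply with a "timed out, try again later" message.
        # Note: the worker thread keeps running in the background.
        return None
    finally:
        pool.shutdown(wait=False)
```

With this, the WeChat handler can always send *some* reply in time, even when the OpenAI API is slow.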

@ToryPan
Owner

ToryPan commented Aug 27, 2023 via email

@Joelsenior
Author

The token is set to 123abc

@Joelsenior
Author

Recorded time: 1693474792  Current time: 1693474792
This is the result: Request timed out, please try again later!
[The official API has been responding slowly recently; if timeouts keep happening, please come back at another time 😅~]
162.62.81.123 - - [31/Aug/2023 17:39:53] "POST /wechat/?signature=35e985c89fa1167d6bce65c75c401c68eae9759e&timestamp=1693474792&nonce=448127359&openid=o9roj6iNXCnEn0KLwcAbC0ZtjAIk HTTP/1.1" 200 -
180.163.29.212 - - [31/Aug/2023 17:40:03] "GET /wechat HTTP/1.1" 308 -
None
180.163.29.212 - - [31/Aug/2023 17:40:05] "GET /wechat/ HTTP/1.1" 500 -
Traceback (most recent call last):
File "/usr/local/lib64/python3.6/site-packages/flask/app.py", line 2091, in __call__
return self.wsgi_app(environ, start_response)
File "/usr/local/lib64/python3.6/site-packages/flask/app.py", line 2076, in wsgi_app
response = self.handle_exception(e)
File "/usr/local/lib64/python3.6/site-packages/flask/app.py", line 2073, in wsgi_app
response = self.full_dispatch_request()
File "/usr/local/lib64/python3.6/site-packages/flask/app.py", line 1519, in full_dispatch_request
return self.finalize_request(rv)
File "/usr/local/lib64/python3.6/site-packages/flask/app.py", line 1538, in finalize_request
response = self.make_response(rv)
File "/usr/local/lib64/python3.6/site-packages/flask/app.py", line 1702, in make_response
f"The view function for {request.endpoint!r} did not"
TypeError: The view function for 'wechat' did not return a valid response. The function either returned None or ended without a return statement.
180.163.29.212 - - [31/Aug/2023 17:40:11] "GET /wechat/?debugger=yes&cmd=resource&f=debugger.js HTTP/1.1" 200
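The traceback above comes from opening /wechat/ in a browser: the view only handles WeChat's POST callbacks, so a plain GET falls through and the function returns None, which Flask rejects. A minimal sketch of a fix (the route path matches the logs; the echostr handling is my assumption based on WeChat's standard server-verification flow, not this repo's code):

```python
from flask import Flask, request

app = Flask(__name__)

@app.route('/wechat/', methods=['GET', 'POST'])
def wechat():
    if request.method == 'GET':
        # WeChat server verification sends a GET with an echostr query
        # parameter and expects it echoed back. Returning a string here
        # (even an empty one) avoids the "view function did not return
        # a valid response" TypeError seen in the traceback.
        return request.args.get('echostr', '')
    # POST handling (message callbacks) would go here.
    return 'success'
```

In short: every code path of a Flask view must end in a `return` of a valid response.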

@Joelsenior
Author

Joelsenior commented Sep 1, 2023

Found the cause: my OpenAI key was banned. You can buy a key on Taobao; posting this for everyone's reference.
I figured it out with a test script I wrote. Here it is:
import requests

# Replace with your own OpenAI API key
api_key = "your-key"

# Request headers, including the API key
headers = {
    "Authorization": f"Bearer {api_key}",
    "Content-Type": "application/json"
}

# Request payload
data = {
    "model": "gpt-3.5-turbo",
    "messages": [{"role": "user", "content": "Say this is a test!"}],
    "temperature": 0.7
}

# Endpoint URL of the chat completions API
api_url = "https://api.openai.com/v1/chat/completions"

# Send the POST request
response = requests.post(api_url, headers=headers, json=data)

# Handle the response
if response.status_code == 200:
    result = response.json()
    generated_text = result["choices"][0]["message"]["content"]
    print("ChatGPT reply:", generated_text)
else:
    print("Request failed, HTTP status code:", response.status_code)
    print("Error message:", response.text)
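Building on the script above, here is a variant (a sketch of mine, not from the thread) that also passes an explicit `timeout` to `requests.post`, so it can tell an invalid or banned key (HTTP 401) apart from a slow upstream — the two failure modes discussed in this issue. The function name is my own.

```python
import requests

def check_openai_key(api_key, timeout_s=10):
    """Return a short diagnosis string for an OpenAI API key."""
    headers = {
        "Authorization": f"Bearer {api_key}",
        "Content-Type": "application/json",
    }
    data = {
        "model": "gpt-3.5-turbo",
        "messages": [{"role": "user", "content": "ping"}],
        "max_tokens": 1,
    }
    try:
        resp = requests.post(
            "https://api.openai.com/v1/chat/completions",
            headers=headers, json=data, timeout=timeout_s,
        )
    except requests.Timeout:
        return "timeout"        # upstream slow or unreachable
    if resp.status_code == 200:
        return "ok"
    if resp.status_code == 401:
        return "invalid key"    # banned/revoked keys get 401 Unauthorized
    return f"http {resp.status_code}"
```

Without a `timeout` argument, `requests` can wait indefinitely, which would look exactly like the hangs in the logs even when the real problem is the key.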
