1. Background
OpenAI exposes an open API for calling its LLMs for text completion, image generation, and other capabilities, but there is currently no official Go SDK. To make later integration and application development easier, we can wrap the core completion endpoint in Golang and provide model-calling capability through that wrapper.
2. Preparation
Official OpenAI API documentation: https://platform.openai.com/docs/overview
- First register an OpenAI account and create an API key: https://platform.openai.com/api-keys. The account needs a top-up of 5 USD to cover API billing. Topping up requires a prepaid card that supports USD payments; alternatives are paying through a recharge agent or buying an existing pre-funded account, e.g.: https://eylink.cn/
- The official API host is https://api.openai.com, which is only reachable from mainland China through a global proxy; a proxy domain such as https://api.openai-proxy.com can be used instead.
- API calls are billed by the number of tokens consumed: https://openai.com/api/pricing/ (a short cost-estimation sketch follows this list)
  - gpt-3.5-turbo: $2 / 1M tokens
  - gpt-4-turbo: $40 / 1M tokens
  - gpt-4o: $20 / 1M tokens
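As a rough illustration of token-based billing, the sketch below estimates the cost of a single call from the `usage` field returned by the API. The per-million-token prices are the approximate figures from the list above and should be checked against the current pricing page; real pricing also distinguishes prompt and completion tokens.

```go
package main

import "fmt"

// Approximate blended prices in USD per 1M tokens, taken from the list above.
// Treat these as assumptions for illustration, not authoritative pricing.
var pricePerMillionTokens = map[string]float64{
	"gpt-3.5-turbo": 2,
	"gpt-4-turbo":   40,
	"gpt-4o":        20,
}

// estimateCost returns the estimated USD cost of one call given the model and total tokens used.
func estimateCost(model string, totalTokens int) float64 {
	return pricePerMillionTokens[model] * float64(totalTokens) / 1_000_000
}

func main() {
	// Example: the test run later in this post reports total_tokens = 32 for gpt-3.5-turbo.
	fmt.Printf("estimated cost: $%.6f\n", estimateCost("gpt-3.5-turbo", 32))
}
```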
3. Implementation
The code has been uploaded to: https://github.com/pbrong/llm_hub/blob/master/pkg/llm_caller/gpt_caller.go
- helper.go
```go
package llm_caller

import "context"

// Compile-time check that gptLLMCaller implements LLMCaller.
var (
	_ LLMCaller = &gptLLMCaller{}
)

// LLMCaller is the common interface for calling an LLM with a user prompt.
type LLMCaller interface {
	Call(ctx context.Context, userPrompt string) (completions string, err error)
}

// Message is a single chat message sent to the completions API.
type Message struct {
	Role    string `json:"role"`
	Content string `json:"content"`
}

// GptCompletion maps the response body of /v1/chat/completions.
type GptCompletion struct {
	Created int `json:"created"`
	Usage   struct {
		CompletionTokens int `json:"completion_tokens"`
		PromptTokens     int `json:"prompt_tokens"`
		TotalTokens      int `json:"total_tokens"`
	} `json:"usage"`
	Model   string `json:"model"`
	ID      string `json:"id"`
	Choices []struct {
		FinishReason string `json:"finish_reason"`
		Index        int    `json:"index"`
		Message      struct {
			Role    string `json:"role"`
			Content string `json:"content"`
		} `json:"message"`
	} `json:"choices"`
	SystemFingerprint interface{} `json:"system_fingerprint"`
	Object            string      `json:"object"`
}
```
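To illustrate how `GptCompletion` maps the raw response, the standalone snippet below unmarshals a trimmed-down sample body. The JSON literal is made up for illustration, mirroring the shape of the response logged in the test run later in this post, and the struct is a local copy of only the fields needed here.

```go
package main

import (
	"encoding/json"
	"fmt"
)

// Trimmed copy of the response struct from helper.go, kept local so this snippet compiles on its own.
type GptCompletion struct {
	Model   string `json:"model"`
	Choices []struct {
		FinishReason string `json:"finish_reason"`
		Message      struct {
			Role    string `json:"role"`
			Content string `json:"content"`
		} `json:"message"`
	} `json:"choices"`
	Usage struct {
		TotalTokens int `json:"total_tokens"`
	} `json:"usage"`
}

func main() {
	// Sample body shaped like the /v1/chat/completions response shown in the test log below.
	raw := `{"model":"gpt-3.5-turbo-0125","choices":[{"finish_reason":"stop","message":{"role":"assistant","content":"Hello! How can I assist you today?"}}],"usage":{"total_tokens":32}}`

	var completion GptCompletion
	if err := json.Unmarshal([]byte(raw), &completion); err != nil {
		panic(err)
	}
	fmt.Println(completion.Choices[0].Message.Content) // Hello! How can I assist you today?
}
```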
- gpt_caller.go
```go
package llm_caller

import (
	"context"
	"encoding/json"
	"fmt"

	"llm_hub/conf"
	"llm_hub/pkg/http"
)

const (
	// Request path
	CompletionsURL = "/v1/chat/completions"
	// Available model
	Gpt35TurboModel = "gpt-3.5-turbo"
)

// gptLLMCaller calls the OpenAI chat completions API with a fixed system prompt.
type gptLLMCaller struct {
	openAiKey   string
	systemText  string
	temperature float64
	maxTokens   int64
}

func NewGptLLMCaller(ctx context.Context, systemText string, temperature float64, maxTokens int64) (*gptLLMCaller, error) {
	return &gptLLMCaller{
		openAiKey:   conf.LLMHubConfig.Openai.Key,
		systemText:  systemText,
		temperature: temperature,
		maxTokens:   maxTokens,
	}, nil
}

// Call sends the user prompt to /v1/chat/completions and returns the content of the first choice.
func (caller *gptLLMCaller) Call(ctx context.Context, userPrompt string) (completion string, err error) {
	reqURL := conf.LLMHubConfig.Openai.Host + CompletionsURL
	body := map[string]interface{}{
		"model":       Gpt35TurboModel,
		"temperature": caller.temperature,
		"stream":      false,
		"max_tokens":  caller.maxTokens,
		"messages":    buildPromptMessages(caller.systemText, userPrompt),
	}
	headers := buildAuthHeaders(caller.openAiKey)

	resp, err := http.PostWithHeader(reqURL, body, headers)
	if err != nil {
		return "", fmt.Errorf("gpt call failed, err = %v", err)
	}

	var gptCompletion GptCompletion
	if err = json.Unmarshal(resp, &gptCompletion); err != nil {
		return "", fmt.Errorf("gpt response unmarshal failed, err = %v", err)
	}
	if len(gptCompletion.Choices) > 0 {
		completion = gptCompletion.Choices[0].Message.Content
	}
	return completion, nil
}

// buildAuthHeaders builds the Bearer-token authorization header.
func buildAuthHeaders(key string) map[string]string {
	return map[string]string{
		"Authorization": "Bearer " + key,
	}
}

// buildPromptMessages assembles the system + user messages for the chat request.
func buildPromptMessages(system string, user string) []*Message {
	var messages []*Message
	messages = append(messages, &Message{
		Role:    "system",
		Content: system,
	})
	messages = append(messages, &Message{
		Role:    "user",
		Content: user,
	})
	return messages
}
```
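The caller relies on a `PostWithHeader` helper from `llm_hub/pkg/http`, which is not shown above. Below is a minimal sketch of what such a helper could look like, assuming it JSON-encodes the body, applies the headers, and returns the raw response bytes; the actual implementation in the repository may differ (logging, timeouts, retries).

```go
package http

import (
	"bytes"
	"encoding/json"
	"fmt"
	"io"
	nethttp "net/http"
)

// PostWithHeader marshals body to JSON, posts it to url with the given headers,
// and returns the raw response bytes. Sketch only; not the repository's actual code.
func PostWithHeader(url string, body interface{}, headers map[string]string) ([]byte, error) {
	payload, err := json.Marshal(body)
	if err != nil {
		return nil, err
	}

	req, err := nethttp.NewRequest(nethttp.MethodPost, url, bytes.NewReader(payload))
	if err != nil {
		return nil, err
	}
	req.Header.Set("Content-Type", "application/json")
	for k, v := range headers {
		req.Header.Set(k, v)
	}

	resp, err := nethttp.DefaultClient.Do(req)
	if err != nil {
		return nil, err
	}
	defer resp.Body.Close()

	data, err := io.ReadAll(resp.Body)
	if err != nil {
		return nil, err
	}
	if resp.StatusCode != nethttp.StatusOK {
		return nil, fmt.Errorf("unexpected status %d: %s", resp.StatusCode, data)
	}
	return data, nil
}
```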
- Test: gpt_caller_test.go
```go
package llm_caller

import (
	"context"
	"testing"

	"github.com/stretchr/testify/assert"

	"llm_hub/conf"
)

func Test_gptLLMCaller_Call(t *testing.T) {
	conf.Init()
	ctx := context.Background()

	caller, err := NewGptLLMCaller(ctx, "hello gpt-3.5-turbo", 0.5, 128)
	assert.Nil(t, err)

	completion, err := caller.Call(ctx, "hello world")
	assert.Nil(t, err)
	t.Logf("gpt call success, completion = %v", completion)
}
```
The gpt-3.5-turbo model call succeeds:
```
2024/05/26 22:17:11 post with header, url = https://api.openai-proxy.com/v1/chat/completions, request = {
  "max_tokens": 128,
  "messages": [
    {
      "role": "system",
      "content": "hello gpt-3.5-turbo"
    },
    {
      "role": "user",
      "content": "hello world"
    }
  ],
  "model": "gpt-3.5-turbo",
  "stream": false,
  "temperature": 0.5
}, response = {
  "id": "chatcmpl-9T8yMDeJDEHKyN70Skk3prHvHZBkz",
  "object": "chat.completion",
  "created": 1716733030,
  "model": "gpt-3.5-turbo-0125",
  "choices": [
    {
      "index": 0,
      "message": {
        "role": "assistant",
        "content": "Hello! How can I assist you today?"
      },
      "logprobs": null,
      "finish_reason": "stop"
    }
  ],
  "usage": {
    "prompt_tokens": 23,
    "completion_tokens": 9,
    "total_tokens": 32
  },
  "system_fingerprint": null
}
    gpt_caller_test.go:17: gpt call success, completion = Hello! How can I assist you today?
--- PASS: Test_gptLLMCaller_Call (1.31s)
PASS
```
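The test initializes configuration with `conf.Init()`, and the caller reads `conf.LLMHubConfig.Openai.Host` and `conf.LLMHubConfig.Openai.Key`. The config package itself is not shown in this post; the sketch below is one plausible shape for it, with the file name, file format, and field tags all assumed rather than taken from the repository.

```go
package conf

import (
	"os"

	"gopkg.in/yaml.v3"
)

// LLMHubConfig holds the global configuration read by the callers.
// Only the fields used in gpt_caller.go are sketched here.
var LLMHubConfig struct {
	Openai struct {
		Host string `yaml:"host"` // e.g. https://api.openai-proxy.com
		Key  string `yaml:"key"`  // the API key created at platform.openai.com/api-keys
	} `yaml:"openai"`
}

// Init loads the configuration from a YAML file; the path is an assumption for this sketch.
func Init() {
	data, err := os.ReadFile("conf/llm_hub.yaml")
	if err != nil {
		panic(err)
	}
	if err := yaml.Unmarshal(data, &LLMHubConfig); err != nil {
		panic(err)
	}
}
```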