Calling the API in Different Environments

1. Create an API Key

  1. Create an API key
    Register an account at console.groq.com, log in, and create a new API key; fill in the required details and save, and the system generates a unique API key.

  2. Save the API key
    Keep it somewhere safe; it is a string starting with gsk_ (written as gsk_YOUR_API_KEY in the examples below):

    gsk_YOUR_API_KEY
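
Hardcoding the key into every snippet below is fine for a quick test, but it is easy to leak. A minimal Python sketch of reading it from an environment variable instead (the variable name GROQ_API_KEY is a common convention, not a requirement):

    import os

    # fails loudly if the variable is unset, which beats silently sending an empty key
    api_key = os.environ["GROQ_API_KEY"]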

2. Call the API with curl

Send a request with curl to list all available models

  1. Open a terminal
  2. Run curl with the following arguments
    curl -X GET "https://api.groq.com/openai/v1/models" \
      -H "Authorization: Bearer gsk_YOUR_API_KEY" \
      -H "Content-Type: application/json"
  3. Check the result (one way to read it is sketched below)
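
The response is a JSON object whose data field is an array of model entries. A minimal Python sketch (assuming the requests package and the same placeholder key) that prints just the model IDs:

    import requests

    # placeholder key; substitute your real gsk_ key
    headers = {"Authorization": "Bearer gsk_YOUR_API_KEY"}

    resp = requests.get("https://api.groq.com/openai/v1/models", headers=headers)
    resp.raise_for_status()

    # each entry in "data" is a model object with an "id" field
    for model in resp.json()["data"]:
        print(model["id"])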

Call a free model with curl

  1. Open a terminal
  2. Enter the command
    curl https://api.groq.com/openai/v1/chat/completions \
      -H "Content-Type: application/json" \
      -H "Authorization: Bearer gsk_YOUR_API_KEY" \
      -d '{
        "model": "llama-3.3-70b-versatile",
        "messages": [{
          "role": "user",
          "content": "Explain the importance of fast language models"
        }]
      }'
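
The response follows the OpenAI-compatible chat-completions schema: the reply text sits at choices[0].message.content, next to token usage counts. An abbreviated sketch of the shape as a Python dict (the field values here are illustrative, not real output):

    # abbreviated chat-completion response shape (illustrative values)
    response = {
        "id": "chatcmpl-...",               # request id
        "model": "llama-3.3-70b-versatile",
        "choices": [
            {
                "index": 0,
                "message": {"role": "assistant", "content": "..."},  # the reply text
                "finish_reason": "stop",
            }
        ],
        "usage": {"prompt_tokens": 0, "completion_tokens": 0, "total_tokens": 0},
    }

    print(response["choices"][0]["message"]["content"])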

3. Call the API from JavaScript

Call the API from the browser console for a single-turn chat

  1. Open a browser.

  2. Open the browser console.

  3. Enter the following JS code

    fetch("https://api.groq.com/openai/v1/chat/completions", {
      method: "POST",
      headers: {
        "Content-Type": "application/json",
        "Authorization": "Bearer gsk_YOUR_API_KEY"
      },
      body: JSON.stringify({
        model: "llama-3.3-70b-versatile",
        messages: [
          { role: "user", content: "hi" }
        ]
      })
    })
      .then(r => r.json())
      .then(data => {
        console.log("AI:", data.choices[0].message.content);
      })
      .catch(console.error);

Call the API from the browser console and inspect the full JSON response

  1. Open a browser.

  2. Run the following code in the console

    fetch("https://api.groq.com/openai/v1/chat/completions", {
      method: "POST",
      headers: {
        "Content-Type": "application/json",
        "Authorization": "Bearer gsk_YOUR_API_KEY"
      },
      body: JSON.stringify({
        model: "llama-3.3-70b-versatile",
        messages: [
          { role: "user", content: "hi" }
        ]
      })
    })
      .then(r => r.json())
      .then(console.log)
      .catch(console.error);

Run JavaScript with Node to call the API with streaming output

  1. Create testapi.js (install the SDK first: npm install openai)

  2. Code

    import OpenAI from "openai";

    const client = new OpenAI({
      apiKey: "gsk_YOUR_API_KEY",
      baseURL: "https://api.groq.com/openai/v1"
    });

    async function main() {
      const stream = await client.chat.completions.create({
        model: "llama-3.3-70b-versatile",
        stream: true,
        messages: [
          { role: "user", content: "Explain the importance of fast language models" }
        ]
      });

      for await (const chunk of stream) {
        const text = chunk.choices?.[0]?.delta?.content;
        if (text) process.stdout.write(text);
      }
    }

    main();

  3. Run it with node

    node testapi.js
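
For comparison, the same streaming pattern in Python using the openai package (a sketch; install it as in section 4):

    from openai import OpenAI

    client = OpenAI(
        api_key="gsk_YOUR_API_KEY",              # placeholder key
        base_url="https://api.groq.com/openai/v1"
    )

    # stream=True yields chunks whose delta carries the incremental text
    stream = client.chat.completions.create(
        model="llama-3.3-70b-versatile",
        stream=True,
        messages=[{"role": "user", "content": "Explain the importance of fast language models"}],
    )

    for chunk in stream:
        if chunk.choices and chunk.choices[0].delta.content:
            print(chunk.choices[0].delta.content, end="", flush=True)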

Write a web page to chat with the model

  1. Create a file named testchat.html
  2. Write the code
    <!DOCTYPE html>
    <html>
    <body>
    <textarea id="out" style="width:100%;height:200px;"></textarea>
    <input id="msg" /><button onclick="go()">Send</button>

    <script>
    const KEY = "gsk_YOUR_API_KEY";

    async function go() {
      const msg = document.getElementById("msg").value;
      const out = document.getElementById("out");

      const res = await fetch("https://api.groq.com/openai/v1/chat/completions", {
        method: "POST",
        headers: {
          "Content-Type": "application/json",
          "Authorization": "Bearer " + KEY
        },
        body: JSON.stringify({
          model: "llama-3.3-70b-versatile",
          messages: [
            { role: "user", content: msg }
          ]
        })
      });

      const data = await res.json();
      out.value += "\nAI: " + data.choices[0].message.content;
    }
    </script>
    </body>
    </html>

  3. Open the page in a browser and test the chat
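
If the browser refuses the request when testchat.html is opened directly via file://, serve it over HTTP instead; Python's standard library is enough (a minimal sketch, run in the file's directory):

    # serve the current directory at http://localhost:8000 (then open /testchat.html)
    import http.server
    import socketserver

    with socketserver.TCPServer(("", 8000), http.server.SimpleHTTPRequestHandler) as httpd:
        httpd.serve_forever()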

4. Call the API from Python

Call the API in Python and view the JSON output

  1. Install the environment

    pip3 install openai
  2. Write the code

    from openai import OpenAI
    import json

    client = OpenAI(
        api_key="gsk_YOUR_API_KEY",
        base_url="https://api.groq.com/openai/v1"
    )

    resp = client.chat.completions.create(
        model="llama-3.3-70b-versatile",
        messages=[{"role": "user", "content": "hi"}]
    )

    print(json.dumps(resp.model_dump(), indent=2, ensure_ascii=False))

  3. Run it and check the result
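
If you only want the reply text rather than the whole dump, it is a single attribute chain on the same resp object:

    print(resp.choices[0].message.content)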

Call the API in Python for a multi-turn chat

  1. Install the environment (same as above: pip3 install openai)

  2. Write the code

    from openai import OpenAI

    client = OpenAI(
        api_key="gsk_YOUR_API_KEY",  # must be pure ASCII
        base_url="https://api.groq.com/openai/v1"
    )

    history = []

    while True:
        user = input("You: ")
        if user.lower() == "exit":
            break

        history.append({"role": "user", "content": user})

        resp = client.chat.completions.create(
            model="llama-3.3-70b-versatile",
            messages=history
        )

        ai = resp.choices[0].message.content  # the reply text
        print("AI:", ai)

        history.append({"role": "assistant", "content": ai})
  3. Run it and check the result (a note on history growth follows below)
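
One caveat: history grows without bound, so a long session will eventually exceed the model's context window. A minimal sketch of capping it (the cap of 20 messages is an arbitrary choice, not a Groq requirement):

    MAX_MESSAGES = 20  # arbitrary cap; tune to your context budget

    def trimmed(history):
        """Keep only the most recent messages so the request stays inside the context window."""
        return history[-MAX_MESSAGES:]

    # in the loop, call the API with the trimmed view instead of the full history:
    # resp = client.chat.completions.create(model=..., messages=trimmed(history))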

5. Call the API from C++

Call the API from C++ on Linux and print the returned JSON

  1. Download httplib.h

    wget https://raw.githubusercontent.com/yhirose/cpp-httplib/master/httplib.h

    or:

    curl -O https://raw.githubusercontent.com/yhirose/cpp-httplib/master/httplib.h
  2. Create testgroq.cpp

  3. Write the code

    #define CPPHTTPLIB_OPENSSL_SUPPORT
    #include "httplib.h"
    #include <iostream>

    int main() {
        // HTTPS client
        httplib::SSLClient cli("api.groq.com", 443);

        // set this to true if you want certificate verification
        cli.enable_server_certificate_verification(false);

        std::string body = R"({
            "model": "llama-3.3-70b-versatile",
            "messages": [
                { "role": "user", "content": "hi" }
            ]
        })";

        httplib::Headers headers = {
            {"Content-Type", "application/json"},
            {"Authorization", "Bearer gsk_YOUR_API_KEY"}
        };

        auto res = cli.Post("/openai/v1/chat/completions", headers, body, "application/json");

        if (res) {
            std::cout << "Status: " << res->status << "\n";
            std::cout << "Body:\n" << res->body << "\n";
        } else {
            std::cout << "Request failed\n";
        }

        return 0;
    }

  4. Compile (Linux)

    g++ testgroq.cpp -o testgroq -lssl -lcrypto

    Most Linux distributions ship the OpenSSL runtime, but you may still need the development headers (e.g. libssl-dev on Debian/Ubuntu) for the build to find them.

Call the API from C++ on Linux and print the message via streaming output

  1. Create teststream.cpp

  2. Write the code (it hand-parses the SSE stream; a sketch for inspecting the raw frames follows at the end of this section)

    #define CPPHTTPLIB_OPENSSL_SUPPORT
    #include "httplib.h"
    #include <iostream>
    #include <sstream>

    int main() {
        // create an HTTPS client
        httplib::SSLClient cli("api.groq.com", 443);
        cli.enable_server_certificate_verification(false); // simplified for the example

        // request headers
        httplib::Headers headers = {
            {"Content-Type", "application/json"},
            {"Authorization", "Bearer gsk_YOUR_API_KEY"}
        };

        // streaming request body
        std::string body = R"({
            "model": "llama-3.3-70b-versatile",
            "stream": true,
            "messages": [
                { "role": "user", "content": "hello" }
            ]
        })";

        // send the POST and receive the response incrementally
        auto res = cli.Post(
            "/openai/v1/chat/completions",
            headers,
            body,
            "application/json",
            [&](const char *data, size_t len) {
                std::string chunk(data, len);

                // SSE format: data: {...}\n\n
                std::istringstream ss(chunk);
                std::string line;

                while (std::getline(ss, line)) {
                    if (line.rfind("data:", 0) == 0) {
                        std::string json = line.substr(5);

                        if (json.find("[DONE]") != std::string::npos) {
                            std::cout << "\n[stream finished]\n";
                            return true;
                        }

                        // naive scan for the content field
                        auto pos = json.find("\"content\":");
                        if (pos != std::string::npos) {
                            auto start = json.find("\"", pos + 10);
                            auto end = json.find("\"", start + 1);
                            if (start != std::string::npos && end != std::string::npos) {
                                std::string token = json.substr(start + 1, end - start - 1);
                                std::cout << token << std::flush;
                            }
                        }
                    }
                }

                return true; // keep receiving
            }
        );

        if (!res) {
            std::cout << "Request failed\n";
        }

        return 0;
    }

  3. Create httplib_wrapper.cpp so httplib can be built into a static library

    #define CPPHTTPLIB_OPENSSL_SUPPORT
    #include "httplib.h"
  4. Create a Makefile so httplib is not recompiled on every build

    # note: recipe lines must be indented with a real TAB character
    CXX = g++
    CXXFLAGS = -O2 -Wall -std=c++17
    LDFLAGS = -lssl -lcrypto

    all: teststream

    libhttplib.a: httplib_wrapper.o
    	ar rcs libhttplib.a httplib_wrapper.o

    httplib_wrapper.o: httplib_wrapper.cpp httplib.h
    	$(CXX) $(CXXFLAGS) -c httplib_wrapper.cpp -o httplib_wrapper.o

    teststream: teststream.o libhttplib.a
    	$(CXX) teststream.o -L. -lhttplib $(LDFLAGS) -o teststream

    teststream.o: teststream.cpp
    	$(CXX) $(CXXFLAGS) -c teststream.cpp -o teststream.o

    clean:
    	rm -f *.o *.a teststream

  5. Build

    make

  6. Run

    ./teststream
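
The hand-rolled parser in teststream.cpp consumes Server-Sent Events frames of the form data: {json}. To see the raw frames for yourself, a minimal Python sketch using requests with stream=True:

    import requests

    resp = requests.post(
        "https://api.groq.com/openai/v1/chat/completions",
        headers={"Authorization": "Bearer gsk_YOUR_API_KEY"},  # placeholder key
        json={
            "model": "llama-3.3-70b-versatile",
            "stream": True,
            "messages": [{"role": "user", "content": "hello"}],
        },
        stream=True,
    )

    # each non-empty line is one SSE frame: "data: {...}" or "data: [DONE]"
    for line in resp.iter_lines():
        if line:
            print(line.decode("utf-8"))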

Appendix

Python script to probe the models

from openai import OpenAI
import time

API_KEY = "gsk_YOUR_API_KEY"

client = OpenAI(
    api_key=API_KEY,
    base_url="https://api.groq.com/openai/v1"
)

def supports_chat(model_id):
    """Filter out models that do not support chat."""
    try:
        client.chat.completions.create(
            model=model_id,
            messages=[{"role": "user", "content": "test"}],
            max_tokens=1
        )
        return True
    except Exception as e:
        err = str(e)
        if "does not support chat completions" in err:
            return False
        if "model_terms_required" in err:
            return False
        return True  # other errors do not imply the model lacks chat support

def detect_model(model_id):
    """Call the model twice to judge whether it is free to use."""
    try:
        # first call (warmup)
        client.chat.completions.create(
            model=model_id,
            messages=[{"role": "user", "content": "hi"}],
            max_tokens=1
        )
        time.sleep(0.2)

        # second call (the one that actually reveals billing errors)
        client.chat.completions.create(
            model=model_id,
            messages=[{"role": "user", "content": "hi"}],
            max_tokens=1
        )

        return "free"

    except Exception as e:
        err = str(e)

        if "insufficient_quota" in err or "payment" in err or "billing" in err:
            return "paid"

        if "model_terms_required" in err:
            return "terms_required"

        if "decommissioned" in err:
            return "deprecated"

        if "does not support chat completions" in err:
            return "not_chat"

        return f"error: {err}"

def main():
    models = client.models.list().data

    print("Models detected:", len(models))
    print("-" * 50)

    for m in models:
        model_id = m.id

        # skip models that do not support chat
        if not supports_chat(model_id):
            print(f"{model_id}: not_chat")
            continue

        status = detect_model(model_id)
        print(f"{model_id}: {status}")

if __name__ == "__main__":
    main()
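
Note that the probe fires up to three requests per model in quick succession; with a long model list this can trip per-minute rate limits, so adding a short time.sleep() between iterations of the loop in main() is a reasonable tweak.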