

API Reference

The CodeProject.AI API is divided into categories such as Image, Vision, Text and Status, and each category is further broken down into sub-topics.

This document will continually change and be updated to reflect the latest server version and the analysis modules installed.

Computer Audition

Sound Classifier

Classifies sound files based on the UrbanSound8K dataset.

POST: https://:32168/v1/sound/classify

Platform

All

Parameters

  • sound (File): The HTTP file object (WAV sound file) to be analyzed.

  • min_confidence (Float): The minimum confidence required for a successful classification, in the range 0.0 to 1.0. Optional. Defaults to 0.4.

Response

JSON
{
  "success": (Boolean) // True if successful.
  "label": (Text) // The classification label of the sound.
  "confidence": (Float) // The confidence in the classification in the range of 0.0 to 1.0.
  "inferenceMs": (Integer) // The time (ms) to perform the AI inference.
  "processMs": (Integer) // The time (ms) to process the image (includes inference and image manipulation operations).
  "moduleId": (String) // The Id of the module that processed this request.
  "moduleName": (String) // The name of the module that processed this request.
  "command": (String) // The command that was sent as part of this request. Can be detect, list, status.
  "statusData": (Object) // [Optional] An object containing (if available) the current module status data.
  "inferenceDevice": (String) // The name of the device handling the inference. eg CPU, GPU, TPU
  "analysisRoundTripMs": (Integer) // The time (ms) for the round trip to the analysis module and back.
  "processedBy": (String) // The hostname of the server that processed this request.
  "timestampUTC": (String) // The timestamp (UTC) of the response.
}

Example

JavaScript
// Assume we have a HTML INPUT type=file control with ID=fileChooser
var formData = new FormData();
formData.append('sound', fileChooser.files[0]);
formData.append("min_confidence", 0.4);

var url = 'https://:32168/v1/sound/classify';

fetch(url, { method: "POST", body: formData})
      .then(response => {
           if (response.ok) {
               response.json().then(data => {
                   console.log("success: " + data.success)
                   console.log("label: " + data.label)
                   console.log("confidence: " + data.confidence.toFixed(2))
                   console.log("inferenceMs: " + data.inferenceMs)
                   console.log("processMs: " + data.processMs)
                   console.log("moduleId: " + data.moduleId)
                   console.log("moduleName: " + data.moduleName)
                   console.log("command: " + data.command)
                   console.log("statusData: " + JSON.stringify(data.statusData))
                   console.log("inferenceDevice: " + data.inferenceDevice)
                   console.log("analysisRoundTripMs: " + data.analysisRoundTripMs)
                   console.log("processedBy: " + data.processedBy)
                   console.log("timestampUTC: " + data.timestampUTC)
               })
           }
       })
       .catch(error => {
           console.log('Unable to complete API call: ' + error);
       });
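
For comparison, a minimal Python sketch of the same call using the requests package (mirroring the Python example shown for object detection later in this document); the file name my_sound.wav is a placeholder.

Python
import requests

# Read a WAV file and send it to the sound classifier endpoint.
sound_data = open("my_sound.wav", "rb").read()

response = requests.post("https://:32168/v1/sound/classify",
                         files={"sound": sound_data},
                         data={"min_confidence": 0.4}).json()

print(response.get("label"), response.get("confidence"))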

Computer Vision

License Plate Reader

Detects and reads the characters on any license plates found in an image.

POST: https://:32168/v1/vision/alpr

Platform

All, !Windows-Arm64

Parameters

  • upload (File): The image to perform ALPR on.

Response

JSON
{
  "success": (Boolean) // True if successful.
  "message": (String) // A summary of the inference operation.
  "error": (String) // (Optional) A description of the error if success was false.
  "predictions": (Object[]) // An array of objects with the x_max, x_min, y_max, y_min bounds of the plate, label, the plate chars and confidence.
  "count": (Integer) // The number of objects found.
  "inferenceMs": (Integer) // The time (ms) to perform the AI inference.
  "processMs": (Integer) // The time (ms) to process the image (includes inference and image manipulation operations).
  "moduleId": (String) // The Id of the module that processed this request.
  "moduleName": (String) // The name of the module that processed this request.
  "command": (String) // The command that was sent as part of this request. Can be detect, list, status.
  "statusData": (Object) // [Optional] An object containing (if available) the current module status data.
  "inferenceDevice": (String) // The name of the device handling the inference. eg CPU, GPU, TPU
  "analysisRoundTripMs": (Integer) // The time (ms) for the round trip to the analysis module and back.
  "processedBy": (String) // The hostname of the server that processed this request.
  "timestampUTC": (String) // The timestamp (UTC) of the response.
}

Example

JavaScript
// Assume we have a HTML INPUT type=file control with ID=fileChooser
var formData = new FormData();
formData.append('upload', fileChooser.files[0]);

var url = 'https://:32168/v1/vision/alpr';

fetch(url, { method: "POST", body: formData})
      .then(response => {
           if (response.ok) {
               response.json().then(data => {
                   console.log("success: " + data.success)
                   console.log("message: " + data.message)
                   console.log("error: " + data.error)
                   console.log("predictions: " + JSON.stringify(data.predictions))
                   console.log("count: " + data.count)
                   console.log("inferenceMs: " + data.inferenceMs)
                   console.log("processMs: " + data.processMs)
                   console.log("moduleId: " + data.moduleId)
                   console.log("moduleName: " + data.moduleName)
                   console.log("command: " + data.command)
                   console.log("statusData: " + JSON.stringify(data.statusData))
                   console.log("inferenceDevice: " + data.inferenceDevice)
                   console.log("analysisRoundTripMs: " + data.analysisRoundTripMs)
                   console.log("processedBy: " + data.processedBy)
                   console.log("timestampUTC: " + data.timestampUTC)
               })
           }
       })
       .catch(error => {
           console.log('Unable to complete API call: ' + error);
       });
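
The same request as a minimal Python sketch using the requests package; my_image.jpg is a placeholder. It prints the label, confidence and bounds of each detected plate, using the prediction fields described in the response above.

Python
import requests

image_data = open("my_image.jpg", "rb").read()

response = requests.post("https://:32168/v1/vision/alpr",
                         files={"upload": image_data}).json()

# Each prediction holds the plate bounds plus its label and confidence.
for plate in response.get("predictions", []):
    print(plate.get("label"), plate.get("confidence"),
          (plate.get("x_min"), plate.get("y_min"),
           plate.get("x_max"), plate.get("y_max")))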

License Plate Reader, legacy route

Detects the characters on any license plates found in an image.

POST: https://:32168/v1/image/alpr

Platform

All, !Windows-Arm64

Parameters

  • upload (File): The image to perform ALPR on.

Response

JSON
{
  "success": (Boolean) // True if successful.
  "message": (String) // A summary of the inference operation.
  "error": (String) // (Optional) A description of the error if success was false.
  "predictions": (Object[]) // An array of objects with the x_max, x_min, y_max, y_min bounds of the plate, label, the plate chars and confidence.
  "count": (Integer) // The number of objects found.
  "inferenceMs": (Integer) // The time (ms) to perform the AI inference.
  "processMs": (Integer) // The time (ms) to process the image (includes inference and image manipulation operations).
  "moduleId": (String) // The Id of the module that processed this request.
  "moduleName": (String) // The name of the module that processed this request.
  "command": (String) // The command that was sent as part of this request. Can be detect, list, status.
  "statusData": (Object) // [Optional] An object containing (if available) the current module status data.
  "inferenceDevice": (String) // The name of the device handling the inference. eg CPU, GPU, TPU
  "analysisRoundTripMs": (Integer) // The time (ms) for the round trip to the analysis module and back.
  "processedBy": (String) // The hostname of the server that processed this request.
  "timestampUTC": (String) // The timestamp (UTC) of the response.
}

Example

JavaScript
// Assume we have a HTML INPUT type=file control with ID=fileChooser
var formData = new FormData();
formData.append('upload', fileChooser.files[0]);

var url = 'https://:32168/v1/image/alpr';

fetch(url, { method: "POST", body: formData})
      .then(response => {
           if (response.ok) {
               response.json().then(data => {
                   console.log("success: " + data.success)
                   console.log("message: " + data.message)
                   console.log("error: " + data.error)
                   console.log("predictions: " + JSON.stringify(data.predictions))
                   console.log("count: " + data.count)
                   console.log("inferenceMs: " + data.inferenceMs)
                   console.log("processMs: " + data.processMs)
                   console.log("moduleId: " + data.moduleId)
                   console.log("moduleName: " + data.moduleName)
                   console.log("command: " + data.command)
                   console.log("statusData: " + JSON.stringify(data.statusData))
                   console.log("inferenceDevice: " + data.inferenceDevice)
                   console.log("analysisRoundTripMs: " + data.analysisRoundTripMs)
                   console.log("processedBy: " + data.processedBy)
                   console.log("timestampUTC: " + data.timestampUTC)
               })
           }
       })
       .catch(error => {
           console.log('Unable to complete API call: ' + error);
       });
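
Since the response includes a success flag and an optional error field, a caller should check them before using the predictions. A short Python sketch of that pattern, with my_image.jpg as a placeholder:

Python
import requests

image_data = open("my_image.jpg", "rb").read()

response = requests.post("https://:32168/v1/image/alpr",
                         files={"upload": image_data}).json()

if response.get("success"):
    print("Found", response.get("count"), "plate(s)")
else:
    # "error" is only populated when success is false.
    print("ALPR failed:", response.get("error"))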

License Plate Reader RKNN

Detects the characters on any license plates found in an image.

POST: https://:32168/v1/vision/alpr

Platform

Orangepi, Radxarock

Parameters

  • upload (File): The image to perform ALPR on.

Response

JSON
{
  "success": (Boolean) // True if successful.
  "message": (String) // A summary of the inference operation.
  "error": (String) // (Optional) A description of the error if success was false.
  "predictions": (Object[]) // An array of objects with the x_max, x_min, y_max, y_min bounds of the plate, label, the plate chars and confidence.
  "count": (Integer) // The number of objects found.
  "inferenceMs": (Integer) // The time (ms) to perform the AI inference.
  "processMs": (Integer) // The time (ms) to process the image (includes inference and image manipulation operations).
  "moduleId": (String) // The Id of the module that processed this request.
  "moduleName": (String) // The name of the module that processed this request.
  "command": (String) // The command that was sent as part of this request. Can be detect, list, status.
  "statusData": (Object) // [Optional] An object containing (if available) the current module status data.
  "inferenceDevice": (String) // The name of the device handling the inference. eg CPU, GPU, TPU
  "analysisRoundTripMs": (Integer) // The time (ms) for the round trip to the analysis module and back.
  "processedBy": (String) // The hostname of the server that processed this request.
  "timestampUTC": (String) // The timestamp (UTC) of the response.
}

Example

JavaScript
// Assume we have a HTML INPUT type=file control with ID=fileChooser
var formData = new FormData();
formData.append('upload', fileChooser.files[0]);

var url = 'https://:32168/v1/vision/alpr';

fetch(url, { method: "POST", body: formData})
      .then(response => {
           if (response.ok) {
               response.json().then(data => {
                   console.log("success: " + data.success)
                   console.log("message: " + data.message)
                   console.log("error: " + data.error)
                   console.log("predictions: " + JSON.stringify(data.predictions))
                   console.log("count: " + data.count)
                   console.log("inferenceMs: " + data.inferenceMs)
                   console.log("processMs: " + data.processMs)
                   console.log("moduleId: " + data.moduleId)
                   console.log("moduleName: " + data.moduleName)
                   console.log("command: " + data.command)
                   console.log("statusData: " + JSON.stringify(data.statusData))
                   console.log("inferenceDevice: " + data.inferenceDevice)
                   console.log("analysisRoundTripMs: " + data.analysisRoundTripMs)
                   console.log("processedBy: " + data.processedBy)
                   console.log("timestampUTC: " + data.timestampUTC)
               })
           }
       })
       .catch(error => {
           console.log('Unable to complete API call: ' + error);
       });

License Plate Reader RKNN, legacy route

Detects the characters on any license plates found in an image.

POST: https://:32168/v1/image/alpr

Platform

Orangepi, Radxarock

Parameters

  • upload (File): The image to perform ALPR on.

Response

JSON
{
  "success": (Boolean) // True if successful.
  "message": (String) // A summary of the inference operation.
  "error": (String) // (Optional) A description of the error if success was false.
  "predictions": (Object[]) // An array of objects with the x_max, x_min, y_max, y_min bounds of the plate, label, the plate chars and confidence.
  "count": (Integer) // The number of objects found.
  "inferenceMs": (Integer) // The time (ms) to perform the AI inference.
  "processMs": (Integer) // The time (ms) to process the image (includes inference and image manipulation operations).
  "moduleId": (String) // The Id of the module that processed this request.
  "moduleName": (String) // The name of the module that processed this request.
  "command": (String) // The command that was sent as part of this request. Can be detect, list, status.
  "statusData": (Object) // [Optional] An object containing (if available) the current module status data.
  "inferenceDevice": (String) // The name of the device handling the inference. eg CPU, GPU, TPU
  "analysisRoundTripMs": (Integer) // The time (ms) for the round trip to the analysis module and back.
  "processedBy": (String) // The hostname of the server that processed this request.
  "timestampUTC": (String) // The timestamp (UTC) of the response.
}

Example

JavaScript
// Assume we have a HTML INPUT type=file control with ID=fileChooser
var formData = new FormData();
formData.append('upload', fileChooser.files[0]);

var url = 'https://:32168/v1/image/alpr';

fetch(url, { method: "POST", body: formData})
      .then(response => {
           if (response.ok) {
               response.json().then(data => {
                   console.log("success: " + data.success)
                   console.log("message: " + data.message)
                   console.log("error: " + data.error)
                   console.log("predictions: " + JSON.stringify(data.predictions))
                   console.log("count: " + data.count)
                   console.log("inferenceMs: " + data.inferenceMs)
                   console.log("processMs: " + data.processMs)
                   console.log("moduleId: " + data.moduleId)
                   console.log("moduleName: " + data.moduleName)
                   console.log("command: " + data.command)
                   console.log("statusData: " + JSON.stringify(data.statusData))
                   console.log("inferenceDevice: " + data.inferenceDevice)
                   console.log("analysisRoundTripMs: " + data.analysisRoundTripMs)
                   console.log("processedBy: " + data.processedBy)
                   console.log("timestampUTC: " + data.timestampUTC)
               })
           }
       })
       .catch(error => {
           console.log('Unable to complete API call: ' + error);
       });

Object Detector (Coral.AI)

Detects multiple objects in an image.

POST: https://:32168/v1/vision/detection

The object detection module uses YOLO (You Only Look Once) to locate and classify the objects the model has been trained on. At this point 80 different kinds of objects can be detected:

  • bicycle, car, motorcycle, airplane, bus, train, truck, boat
  • traffic light, fire hydrant, stop sign, parking meter, bench
  • cat, dog, horse, sheep, cow, elephant, bear, zebra, giraffe
  • backpack, umbrella, handbag, tie, suitcase, frisbee, skis, snowboard, sports ball, kite, baseball bat, baseball glove, skateboard, surfboard, tennis racket
  • bottle, wine glass, cup, fork, knife, spoon, bowl
  • banana, apple, sandwich, orange, broccoli, carrot, hot dog, pizza, donut, cake
  • chair, couch, potted plant, bed, dining table, toilet, TV monitor, laptop, mouse, remote, keyboard, cell phone, microwave, oven, toaster, sink, refrigerator, book, clock, vase, scissors, teddy bear, hair dryer, toothbrush

Platform

All

Parameters

  • image (File): The HTTP file object (image) to be analyzed.

  • min_confidence (Float): The minimum confidence level for an object to be detected, in the range 0.0 to 1.0. Optional. Defaults to 0.4.

Response

JSON
{
  "success": (Boolean) // True if successful.
  "message": (String) // A summary of the inference operation.
  "error": (String) // (Optional) A description of the error if success was false.
  "predictions": (Object[]) // An array of objects with the x_max, x_min, y_max, y_min, label and confidence.
  "count": (Integer) // The number of objects found.
  "inferenceMs": (Integer) // The time (ms) to perform the AI inference.
  "processMs": (Integer) // The time (ms) to process the image (includes inference and image manipulation operations).
  "moduleId": (String) // The Id of the module that processed this request.
  "moduleName": (String) // The name of the module that processed this request.
  "command": (String) // The command that was sent as part of this request. Can be detect, list, status.
  "statusData": (Object) // [Optional] An object containing (if available) the current module status data.
  "inferenceDevice": (String) // The name of the device handling the inference. eg CPU, GPU, TPU
  "analysisRoundTripMs": (Integer) // The time (ms) for the round trip to the analysis module and back.
  "processedBy": (String) // The hostname of the server that processed this request.
  "timestampUTC": (String) // The timestamp (UTC) of the response.
}

Example

Python
import requests

image_data = open("my_image.jpg","rb").read()

response = requests.post("https://:32168/v1/vision/detection",
                         files={"image":image_data}).json()

for detection in response["predictions"]:
    print(detection["label"])

print(response)
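
Building on the Python example above, a short sketch that keeps only detections of selected classes above a chosen confidence threshold; the class names and threshold are illustrative, not part of the API.

Python
import requests

WANTED = {"car", "truck", "bus"}   # classes of interest (illustrative)
THRESHOLD = 0.6                    # stricter than the server-side min_confidence

image_data = open("my_image.jpg", "rb").read()

response = requests.post("https://:32168/v1/vision/detection",
                         files={"image": image_data},
                         data={"min_confidence": 0.4}).json()

matches = [p for p in response.get("predictions", [])
           if p["label"] in WANTED and p["confidence"] >= THRESHOLD]

for p in matches:
    print(p["label"], round(p["confidence"], 2),
          (p["x_min"], p["y_min"], p["x_max"], p["y_max"]))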
JavaScript
// Assume we have a HTML INPUT type=file control with ID=fileChooser
var formData = new FormData();
formData.append('image', fileChooser.files[0]);
formData.append("min_confidence", 0.4);

var url = 'https://:32168/v1/vision/detection';

fetch(url, { method: "POST", body: formData})
    .then(response => {
        if (response.ok) {
            response.json().then(data => {
                console.log("success: " + data.success)
                console.log("message: " + data.message)
                console.log("error: " + data.error)
                console.log("predictions: " + JSON.stringify(data.predictions))
                console.log("count: " + data.count)
                console.log("inferenceMs: " + data.inferenceMs)
                console.log("processMs: " + data.processMs)
                console.log("moduleId: " + data.moduleId)
                console.log("moduleName: " + data.moduleName)
                console.log("command: " + data.command)
                console.log("statusData: " + JSON.stringify(data.statusData))
                console.log("inferenceDevice: " + data.inferenceDevice)
                console.log("analysisRoundTripMs: " + data.analysisRoundTripMs)
                console.log("processedBy: " + data.processedBy)
                console.log("timestampUTC: " + data.timestampUTC)
            })
        }
    })
    .catch(error => {
        console.log('Unable to complete API call: ' + error);
    });

Custom Object Detector (Coral.AI)

Detects objects based on YOLO PyTorch models. Models are stored in the /ObjectDetectionCoral/assets directory; to call a specific model, use /vision/custom/model-name, where 'model-name' is the name of the model's file.

POST: https://:32168/v1/vision/custom

Platform

All

Parameters

  • image (File): The HTTP file object (image) to be analyzed.

  • min_confidence (Float): The minimum confidence level for an object to be detected, in the range 0.0 to 1.0. The default is 0.4.

Response

JSON
{
  "success": (Boolean) // True if successful.
  "message": (String) // A summary of the inference operation.
  "error": (String) // (Optional) A description of the error if success was false.
  "predictions": (Object[]) // An array of objects with the x_max, x_min, y_max, y_min, label and confidence.
  "inferenceMs": (Integer) // The time (ms) to perform the AI inference.
  "processMs": (Integer) // The time (ms) to process the image (includes inference and image manipulation operations).
  "moduleId": (String) // The Id of the module that processed this request.
  "moduleName": (String) // The name of the module that processed this request.
  "command": (String) // The command that was sent as part of this request. Can be detect, list, status.
  "statusData": (Object) // [Optional] An object containing (if available) the current module status data.
  "inferenceDevice": (String) // The name of the device handling the inference. eg CPU, GPU, TPU
  "analysisRoundTripMs": (Integer) // The time (ms) for the round trip to the analysis module and back.
  "processedBy": (String) // The hostname of the server that processed this request.
  "timestampUTC": (String) // The timestamp (UTC) of the response.
}

Example

JavaScript
// Assume we have a HTML INPUT type=file control with ID=fileChooser
var formData = new FormData();
formData.append('image', fileChooser.files[0]);
formData.append("min_confidence", 0.0);

var url = 'https://:32168/v1/vision/custom';

fetch(url, { method: "POST", body: formData})
      .then(response => {
           if (response.ok) {
               response.json().then(data => {
                   console.log("success: " + data.success)
                   console.log("message: " + data.message)
                   console.log("error: " + data.error)
                   console.log("predictions: " + JSON.stringify(data.predictions))
                   console.log("inferenceMs: " + data.inferenceMs)
                   console.log("processMs: " + data.processMs)
                   console.log("moduleId: " + data.moduleId)
                   console.log("moduleName: " + data.moduleName)
                   console.log("command: " + data.command)
                   console.log("statusData: " + JSON.stringify(data.statusData))
                   console.log("inferenceDevice: " + data.inferenceDevice)
                   console.log("analysisRoundTripMs: " + data.analysisRoundTripMs)
                   console.log("processedBy: " + data.processedBy)
                   console.log("timestampUTC: " + data.timestampUTC)
               })
           }
       })
       .catch(error => {
           console.log('Unable to complete API call: ' + error);
       });
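
As described above, a specific custom model can be targeted by appending its name to the route. A minimal Python sketch, where my_model is a placeholder for the name of a model file installed in the module's assets directory:

Python
import requests

model_name = "my_model"   # placeholder: the name of an installed model file
url = "https://:32168/v1/vision/custom/" + model_name

image_data = open("my_image.jpg", "rb").read()

response = requests.post(url,
                         files={"image": image_data},
                         data={"min_confidence": 0.4}).json()

for detection in response.get("predictions", []):
    print(detection["label"], detection["confidence"])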

Object Detector List Custom Models (Coral.AI)

Returns a list of the available models.

POST: https://:32168/v1/vision/custom/list

Platform

All

Parameters

(None)

Response

JSON
{
  "success": (Boolean) // True if successful.
  "models": (String) // An array of strings containing the names of the models installed.
  "moduleId": (String) // The Id of the module that processed this request.
  "moduleName": (String) // The name of the module that processed this request.
  "command": (String) // The command that was sent as part of this request. Can be detect, list, status.
  "statusData": (Object) // [Optional] An object containing (if available) the current module status data.
  "inferenceDevice": (String) // The name of the device handling the inference. eg CPU, GPU, TPU
  "analysisRoundTripMs": (Integer) // The time (ms) for the round trip to the analysis module and back.
  "processedBy": (String) // The hostname of the server that processed this request.
  "timestampUTC": (String) // The timestamp (UTC) of the response.
}

Example

JavaScript
var url = 'https://:32168/v1/vision/custom/list';

fetch(url, { method: "POST" })
      .then(response => {
           if (response.ok) {
               response.json().then(data => {
                   console.log("success: " + data.success)
                   console.log("models: " + data.models)
                   console.log("moduleId: " + data.moduleId)
                   console.log("moduleName: " + data.moduleName)
                   console.log("command: " + data.command)
                   console.log("statusData: " + JSON.stringify(data.statusData))
                   console.log("inferenceDevice: " + data.inferenceDevice)
                   console.log("analysisRoundTripMs: " + data.analysisRoundTripMs)
                   console.log("processedBy: " + data.processedBy)
                   console.log("timestampUTC: " + data.timestampUTC)
               })
           }
       })
       .catch(error => {
           console.log('Unable to complete API call: ' + error);
       });
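
A minimal Python sketch that retrieves the model list and, if any models are installed, runs a detection against the first one; my_image.jpg is a placeholder.

Python
import requests

# Ask the module which custom models are installed.
models = requests.post("https://:32168/v1/vision/custom/list").json().get("models", [])
print("Installed models:", models)

if models:
    image_data = open("my_image.jpg", "rb").read()
    result = requests.post("https://:32168/v1/vision/custom/" + models[0],
                           files={"image": image_data}).json()
    print(len(result.get("predictions", [])), "objects found")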

Object Detection (YOLOv5 .NET)

The object detection module uses ML.NET and YOLO (You Only Look Once) to locate and classify the objects the model has been trained on. At this point 80 different kinds of objects can be detected.

POST: https://:32168/v1/vision/detection

Platform

All, !Windows-Arm64

Parameters

  • image (File): The image to be analyzed.

  • min_confidence (Float): The minimum confidence level for an object to be detected, in the range 0.0 to 1.0. Optional. Defaults to 0.4.

Response

JSON
{
  "success": (Boolean) // True if successful.
  "message": (String) // A summary of the inference operation.
  "error": (String) // (Optional) A description of the error if success was false.
  "predictions": (Object[]) // An array of objects with the x_max, x_min, y_max, y_min, label and confidence.
  "count": (Integer) // The number of objects found.
  "inferenceMs": (Integer) // The time (ms) to perform the AI inference.
  "processMs": (Integer) // The time (ms) to process the image (includes inference and image manipulation operations).
  "moduleId": (String) // The Id of the module that processed this request.
  "moduleName": (String) // The name of the module that processed this request.
  "command": (String) // The command that was sent as part of this request. Can be detect, list, status.
  "statusData": (Object) // [Optional] An object containing (if available) the current module status data.
  "inferenceDevice": (String) // The name of the device handling the inference. eg CPU, GPU, TPU
  "analysisRoundTripMs": (Integer) // The time (ms) for the round trip to the analysis module and back.
  "processedBy": (String) // The hostname of the server that processed this request.
  "timestampUTC": (String) // The timestamp (UTC) of the response.
}

Example

JavaScript
// Assume we have a HTML INPUT type=file control with ID=fileChooser
var formData = new FormData();
formData.append('image', fileChooser.files[0]);
formData.append("min_confidence", 0.4);

var url = 'https://:32168/v1/vision/detection';

fetch(url, { method: "POST", body: formData})
      .then(response => {
           if (response.ok) {
               response.json().then(data => {
                   console.log("success: " + data.success)
                   console.log("message: " + data.message)
                   console.log("error: " + data.error)
                   console.log("predictions: " + JSON.stringify(data.predictions))
                   console.log("count: " + data.count)
                   console.log("inferenceMs: " + data.inferenceMs)
                   console.log("processMs: " + data.processMs)
                   console.log("moduleId: " + data.moduleId)
                   console.log("moduleName: " + data.moduleName)
                   console.log("command: " + data.command)
                   console.log("statusData: " + JSON.stringify(data.statusData))
                   console.log("inferenceDevice: " + data.inferenceDevice)
                   console.log("analysisRoundTripMs: " + data.analysisRoundTripMs)
                   console.log("processedBy: " + data.processedBy)
                   console.log("timestampUTC: " + data.timestampUTC)
               })
           }
       })
       .catch(error => {
           console.log('Unable to complete API call: ' + error);
       });

Custom Object Detector (YOLOv5 .NET)

Detects objects based on YOLO PyTorch models. Models are stored as .pt files in the /ObjectDetectionYOLOv5Net/assets directory; to call a specific model, use /vision/custom/model-name, where 'model-name' is the name of the model's .pt file.

POST: https://:32168/v1/vision/custom

Object detection modules generally use YOLO (You Only Look Once) to locate and classify the objects a model has been trained on. The custom models included by default are:

  • ipcam-animal - bird, cat, dog, horse, sheep, cow, bear, deer, rabbit, raccoon, fox, skunk, squirrel, pig
  • ipcam-dark - bicycle, bus, car, cat, dog, motorcycle, person
  • ipcam-general - person, vehicle, plus the objects in ipcam-dark
  • ipcam-combined - person, bicycle, car, motorcycle, bus, truck, bird, cat, dog, horse, sheep, cow, bear, deer, rabbit, raccoon, fox, skunk, squirrel, pig

Platform

All, !Windows-Arm64

Parameters

  • image (File): The HTTP file object (image) to be analyzed.

  • min_confidence (Float): The minimum confidence level for an object to be detected, in the range 0.0 to 1.0. The default is 0.4.

Response

JSON
{
  "success": (Boolean) // True if successful.
  "message": (String) // A summary of the inference operation.
  "error": (String) // (Optional) A description of the error if success was false.
  "predictions": (Object[]) // An array of objects with the x_max, x_min, y_max, y_min, label and confidence.
  "inferenceMs": (Integer) // The time (ms) to perform the AI inference.
  "processMs": (Integer) // The time (ms) to process the image (includes inference and image manipulation operations).
  "moduleId": (String) // The Id of the module that processed this request.
  "moduleName": (String) // The name of the module that processed this request.
  "command": (String) // The command that was sent as part of this request. Can be detect, list, status.
  "statusData": (Object) // [Optional] An object containing (if available) the current module status data.
  "inferenceDevice": (String) // The name of the device handling the inference. eg CPU, GPU, TPU
  "analysisRoundTripMs": (Integer) // The time (ms) for the round trip to the analysis module and back.
  "processedBy": (String) // The hostname of the server that processed this request.
  "timestampUTC": (String) // The timestamp (UTC) of the response.
}

Example

JavaScript
// Assume we have a HTML INPUT type=file control with ID=fileChooser
var formData = new FormData();
formData.append('image', fileChooser.files[0]);
formData.append("min_confidence", 0.0);

var url = 'https://:32168/v1/vision/custom';

fetch(url, { method: "POST", body: formData})
      .then(response => {
           if (response.ok) {
               response.json().then(data => {
                   console.log("success: " + data.success)
                   console.log("message: " + data.message)
                   console.log("error: " + data.error)
                   console.log("predictions: " + JSON.stringify(data.predictions))
                   console.log("inferenceMs: " + data.inferenceMs)
                   console.log("processMs: " + data.processMs)
                   console.log("moduleId: " + data.moduleId)
                   console.log("moduleName: " + data.moduleName)
                   console.log("command: " + data.command)
                   console.log("statusData: " + JSON.stringify(data.statusData))
                   console.log("inferenceDevice: " + data.inferenceDevice)
                   console.log("analysisRoundTripMs: " + data.analysisRoundTripMs)
                   console.log("processedBy: " + data.processedBy)
                   console.log("timestampUTC: " + data.timestampUTC)
               })
           }
       })
       .catch(error => {
           console.log('Unable to complete API call: ' + error);
       });
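
For example, to run the bundled ipcam-general model listed above, append its name to the custom route. A minimal Python sketch, with my_image.jpg as a placeholder:

Python
import requests

image_data = open("my_image.jpg", "rb").read()

# Target one of the bundled custom models by name, e.g. ipcam-general.
response = requests.post("https://:32168/v1/vision/custom/ipcam-general",
                         files={"image": image_data}).json()

for detection in response.get("predictions", []):
    print(detection["label"], round(detection["confidence"], 2))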

Object Detector List Custom Models (YOLOv5 .NET)

Returns a list of the available models.

POST: https://:32168/v1/vision/custom/list

Platform

All, !Windows-Arm64

Parameters

(None)

Response

JSON
{
  "success": (Boolean) // True if successful.
  "models": (String) // An array of strings containing the names of the models installed.
  "moduleId": (String) // The Id of the module that processed this request.
  "moduleName": (String) // The name of the module that processed this request.
  "command": (String) // The command that was sent as part of this request. Can be detect, list, status.
  "statusData": (Object) // [Optional] An object containing (if available) the current module status data.
  "inferenceDevice": (String) // The name of the device handling the inference. eg CPU, GPU, TPU
  "analysisRoundTripMs": (Integer) // The time (ms) for the round trip to the analysis module and back.
  "processedBy": (String) // The hostname of the server that processed this request.
  "timestampUTC": (String) // The timestamp (UTC) of the response.
}

Example

JavaScript
var url = 'https://:32168/v1/vision/custom/list';

fetch(url, { method: "POST" })
      .then(response => {
           if (response.ok) {
               response.json().then(data => {
                   console.log("success: " + data.success)
                   console.log("models: " + data.models)
                   console.log("moduleId: " + data.moduleId)
                   console.log("moduleName: " + data.moduleName)
                   console.log("command: " + data.command)
                   console.log("statusData: " + JSON.stringify(data.statusData))
                   console.log("inferenceDevice: " + data.inferenceDevice)
                   console.log("analysisRoundTripMs: " + data.analysisRoundTripMs)
                   console.log("processedBy: " + data.processedBy)
                   console.log("timestampUTC: " + data.timestampUTC)
               })
           }
       })
       .catch(error => {
           console.log('Unable to complete API call: ' + error);
       });

Object Detector (YOLOv5 3.1)

Detects multiple objects, of 80 different types, in an image.

POST: https://:32168/v1/vision/detection

Platform

All, !Macos-Arm64

Parameters

  • image (File): The HTTP file object (image) to be analyzed.

  • min_confidence (Float): The minimum confidence level for an object to be detected, in the range 0.0 to 1.0. Optional. Defaults to 0.4.

Response

JSON
{
  "success": (Boolean) // True if successful.
  "message": (String) // A summary of the inference operation.
  "error": (String) // (Optional) A description of the error if success was false.
  "predictions": (Object[]) // An array of objects with the x_max, x_min, y_max, y_min, label and confidence.
  "count": (Integer) // The number of objects found.
  "inferenceMs": (Integer) // The time (ms) to perform the AI inference.
  "processMs": (Integer) // The time (ms) to process the image (includes inference and image manipulation operations).
  "moduleId": (String) // The Id of the module that processed this request.
  "moduleName": (String) // The name of the module that processed this request.
  "command": (String) // The command that was sent as part of this request. Can be detect, list, status.
  "statusData": (Object) // [Optional] An object containing (if available) the current module status data.
  "inferenceDevice": (String) // The name of the device handling the inference. eg CPU, GPU, TPU
  "analysisRoundTripMs": (Integer) // The time (ms) for the round trip to the analysis module and back.
  "processedBy": (String) // The hostname of the server that processed this request.
  "timestampUTC": (String) // The timestamp (UTC) of the response.
}

Example

JavaScript
// Assume we have a HTML INPUT type=file control with ID=fileChooser
var formData = new FormData();
formData.append('image', fileChooser.files[0]);
formData.append("min_confidence", 0.4);

var url = 'https://:32168/v1/vision/detection';

fetch(url, { method: "POST", body: formData})
      .then(response => {
           if (response.ok) {
               response.json().then(data => {
                   console.log("success: " + data.success)
                   console.log("message: " + data.message)
                   console.log("error: " + data.error)
                   console.log("predictions: " + JSON.stringify(data.predictions))
                   console.log("count: " + data.count)
                   console.log("inferenceMs: " + data.inferenceMs)
                   console.log("processMs: " + data.processMs)
                   console.log("moduleId: " + data.moduleId)
                   console.log("moduleName: " + data.moduleName)
                   console.log("command: " + data.command)
                   console.log("statusData: " + JSON.stringify(data.statusData))
                   console.log("inferenceDevice: " + data.inferenceDevice)
                   console.log("analysisRoundTripMs: " + data.analysisRoundTripMs)
                   console.log("processedBy: " + data.processedBy)
                   console.log("timestampUTC: " + data.timestampUTC)
               })
           }
       })
       .catch(error => {
           console.log('Unable to complete API call: ' + error);
       });

Custom Object Detector (YOLOv5 3.1)

Detects objects based on YOLO PyTorch models. Models are stored as .pt files in the /ObjectDetectionYOLOv5-3.1/custom-models directory; to call a specific model, use /vision/custom/model-name, where 'model-name' is the name of the model's .pt file.

POST: https://:32168/v1/vision/custom

Object detection modules generally use YOLO (You Only Look Once) to locate and classify the objects a model has been trained on. The custom models included by default are:

  • ipcam-animal - bird, cat, dog, horse, sheep, cow, bear, deer, rabbit, raccoon, fox, skunk, squirrel, pig
  • ipcam-dark - bicycle, bus, car, cat, dog, motorcycle, person
  • ipcam-general - person, vehicle, plus the objects in ipcam-dark
  • ipcam-combined - person, bicycle, car, motorcycle, bus, truck, bird, cat, dog, horse, sheep, cow, bear, deer, rabbit, raccoon, fox, skunk, squirrel, pig

Platform

All, !Macos-Arm64

Parameters

  • image (File): The HTTP file object (image) to be analyzed.

  • min_confidence (Float): The minimum confidence level for an object to be detected, in the range 0.0 to 1.0. The default is 0.4.

Response

JSON
{
  "success": (Boolean) // True if successful.
  "message": (String) // A summary of the inference operation.
  "error": (String) // (Optional) A description of the error if success was false.
  "predictions": (Object[]) // An array of objects with the x_max, x_min, y_max, y_min, label and confidence.
  "inferenceMs": (Integer) // The time (ms) to perform the AI inference.
  "processMs": (Integer) // The time (ms) to process the image (includes inference and image manipulation operations).
  "moduleId": (String) // The Id of the module that processed this request.
  "moduleName": (String) // The name of the module that processed this request.
  "command": (String) // The command that was sent as part of this request. Can be detect, list, status.
  "statusData": (Object) // [Optional] An object containing (if available) the current module status data.
  "inferenceDevice": (String) // The name of the device handling the inference. eg CPU, GPU, TPU
  "analysisRoundTripMs": (Integer) // The time (ms) for the round trip to the analysis module and back.
  "processedBy": (String) // The hostname of the server that processed this request.
  "timestampUTC": (String) // The timestamp (UTC) of the response.
}

Example

JavaScript
// Assume we have a HTML INPUT type=file control with ID=fileChooser
var formData = new FormData();
formData.append('image', fileChooser.files[0]);
formData.append("min_confidence", 0.0);

var url = 'https://:32168/v1/vision/custom';

fetch(url, { method: "POST", body: formData})
      .then(response => {
           if (response.ok) {
               response.json().then(data => {
                   console.log("success: " + data.success)
                   console.log("message: " + data.message)
                   console.log("error: " + data.error)
                   console.log("predictions: " + JSON.stringify(data.predictions))
                   console.log("inferenceMs: " + data.inferenceMs)
                   console.log("processMs: " + data.processMs)
                   console.log("moduleId: " + data.moduleId)
                   console.log("moduleName: " + data.moduleName)
                   console.log("command: " + data.command)
                   console.log("statusData: " + JSON.stringify(data.statusData))
                   console.log("inferenceDevice: " + data.inferenceDevice)
                   console.log("analysisRoundTripMs: " + data.analysisRoundTripMs)
                   console.log("processedBy: " + data.processedBy)
                   console.log("timestampUTC: " + data.timestampUTC)
               })
           }
       })
       .catch(error => {
           console.log('Unable to complete API call: ' + error);
       });

Object Detector List Custom Models (YOLOv5 3.1)

Returns a list of the available models.

POST: https://:32168/v1/vision/custom/list

Platform

All, !Macos-Arm64

Parameters

(None)

Response

JSON
{
  "success": (Boolean) // True if successful.
  "models": (String) // An array of strings containing the names of the models installed.
  "moduleId": (String) // The Id of the module that processed this request.
  "moduleName": (String) // The name of the module that processed this request.
  "command": (String) // The command that was sent as part of this request. Can be detect, list, status.
  "statusData": (Object) // [Optional] An object containing (if available) the current module status data.
  "inferenceDevice": (String) // The name of the device handling the inference. eg CPU, GPU, TPU
  "analysisRoundTripMs": (Integer) // The time (ms) for the round trip to the analysis module and back.
  "processedBy": (String) // The hostname of the server that processed this request.
  "timestampUTC": (String) // The timestamp (UTC) of the response.
}

Example

JavaScript
var url = 'https://:32168/v1/vision/custom/list';

fetch(url, { method: "POST" })
      .then(response => {
           if (response.ok) {
               response.json().then(data => {
                   console.log("success: " + data.success)
                   console.log("models: " + data.models)
                   console.log("moduleId: " + data.moduleId)
                   console.log("moduleName: " + data.moduleName)
                   console.log("command: " + data.command)
                   console.log("statusData: " + JSON.stringify(data.statusData))
                   console.log("inferenceDevice: " + data.inferenceDevice)
                   console.log("analysisRoundTripMs: " + data.analysisRoundTripMs)
                   console.log("processedBy: " + data.processedBy)
                   console.log("timestampUTC: " + data.timestampUTC)
               })
           }
       })
       .catch(error => {
           console.log('Unable to complete API call: ' + error);
       });

Object Detector (YOLOv5 6.2)

Detects multiple objects, of 80 possible types, in an image.

POST: https://:32168/v1/vision/detection

Platform

All, !Raspberrypi, !Jetson

Parameters

  • image (File): The HTTP file object (image) to be analyzed.

  • min_confidence (Float): The minimum confidence level for an object to be detected, in the range 0.0 to 1.0. Optional. Defaults to 0.4.

Response

JSON
{
  "success": (Boolean) // True if successful.
  "message": (String) // A summary of the inference operation.
  "error": (String) // (Optional) A description of the error if success was false.
  "predictions": (Object[]) // An array of objects with the x_max, x_min, y_max, y_min, label and confidence.
  "count": (Integer) // The number of objects found.
  "inferenceMs": (Integer) // The time (ms) to perform the AI inference.
  "processMs": (Integer) // The time (ms) to process the image (includes inference and image manipulation operations).
  "moduleId": (String) // The Id of the module that processed this request.
  "moduleName": (String) // The name of the module that processed this request.
  "command": (String) // The command that was sent as part of this request. Can be detect, list, status.
  "statusData": (Object) // [Optional] An object containing (if available) the current module status data.
  "inferenceDevice": (String) // The name of the device handling the inference. eg CPU, GPU, TPU
  "analysisRoundTripMs": (Integer) // The time (ms) for the round trip to the analysis module and back.
  "processedBy": (String) // The hostname of the server that processed this request.
  "timestampUTC": (String) // The timestamp (UTC) of the response.
}

Example

JavaScript
// Assume we have a HTML INPUT type=file control with ID=fileChooser
var formData = new FormData();
formData.append('image', fileChooser.files[0]);
formData.append("min_confidence", 0.4);

var url = 'https://:32168/v1/vision/detection';

fetch(url, { method: "POST", body: formData})
      .then(response => {
           if (response.ok) {
               response.json().then(data => {
                   console.log("success: " + data.success)
                   console.log("message: " + data.message)
                   console.log("error: " + data.error)
                   console.log("predictions: " + JSON.stringify(data.predictions))
                   console.log("count: " + data.count)
                   console.log("inferenceMs: " + data.inferenceMs)
                   console.log("processMs: " + data.processMs)
                   console.log("moduleId: " + data.moduleId)
                   console.log("moduleName: " + data.moduleName)
                   console.log("command: " + data.command)
                   console.log("statusData: " + JSON.stringify(data.statusData))
                   console.log("inferenceDevice: " + data.inferenceDevice)
                   console.log("analysisRoundTripMs: " + data.analysisRoundTripMs)
                   console.log("processedBy: " + data.processedBy)
                   console.log("timestampUTC: " + data.timestampUTC)
               })
           }
       })
       .catch(error => {
           console.log('Unable to complete API call: ' + error);
       });

Custom Object Detector (YOLOv5 6.2)

Detects objects based on YOLO PyTorch models. Models are stored as .pt files in the /ObjectDetectionYOLOv5-6.2/assets directory; to call a specific model, use /vision/custom/model-name, where 'model-name' is the name of the model's .pt file.

POST: https://:32168/v1/vision/custom

Object detection modules generally use YOLO (You Only Look Once) to locate and classify the objects a model has been trained on. The custom models included by default are:

  • ipcam-animal - bird, cat, dog, horse, sheep, cow, bear, deer, rabbit, raccoon, fox, skunk, squirrel, pig
  • ipcam-dark - bicycle, bus, car, cat, dog, motorcycle, person
  • ipcam-general - person, vehicle, plus the objects in ipcam-dark
  • ipcam-combined - person, bicycle, car, motorcycle, bus, truck, bird, cat, dog, horse, sheep, cow, bear, deer, rabbit, raccoon, fox, skunk, squirrel, pig

Platform

All, !Raspberrypi, !Jetson

Parameters

  • image (File): The HTTP file object (image) to be analyzed.

  • min_confidence (Float): The minimum confidence level for an object to be detected, in the range 0.0 to 1.0. The default is 0.4.

Response

JSON
{
  "success": (Boolean) // True if successful.
  "message": (String) // A summary of the inference operation.
  "error": (String) // (Optional) A description of the error if success was false.
  "predictions": (Object[]) // An array of objects with the x_max, x_min, y_max, y_min, label and confidence.
  "inferenceMs": (Integer) // The time (ms) to perform the AI inference.
  "processMs": (Integer) // The time (ms) to process the image (includes inference and image manipulation operations).
  "moduleId": (String) // The Id of the module that processed this request.
  "moduleName": (String) // The name of the module that processed this request.
  "command": (String) // The command that was sent as part of this request. Can be detect, list, status.
  "statusData": (Object) // [Optional] An object containing (if available) the current module status data.
  "inferenceDevice": (String) // The name of the device handling the inference. eg CPU, GPU, TPU
  "analysisRoundTripMs": (Integer) // The time (ms) for the round trip to the analysis module and back.
  "processedBy": (String) // The hostname of the server that processed this request.
  "timestampUTC": (String) // The timestamp (UTC) of the response.
}

Example

JavaScript
// Assume we have a HTML INPUT type=file control with ID=fileChooser
var formData = new FormData();
formData.append('image', fileChooser.files[0]);
formData.append("min_confidence", 0.0);

var url = 'https://:32168/v1/vision/custom';

fetch(url, { method: "POST", body: formData})
      .then(response => {
           if (response.ok) {
               response.json().then(data => {
                   console.log("success: " + data.success)
                   console.log("message: " + data.message)
                   console.log("error: " + data.error)
                   console.log("predictions: " + JSON.stringify(data.predictions))
                   console.log("inferenceMs: " + data.inferenceMs)
                   console.log("processMs: " + data.processMs)
                   console.log("moduleId: " + data.moduleId)
                   console.log("moduleName: " + data.moduleName)
                   console.log("command: " + data.command)
                   console.log("statusData: " + JSON.stringify(data.statusData))
                   console.log("inferenceDevice: " + data.inferenceDevice)
                   console.log("analysisRoundTripMs: " + data.analysisRoundTripMs)
                   console.log("processedBy: " + data.processedBy)
                   console.log("timestampUTC: " + data.timestampUTC)
               })
           }
       })
       .catch(error => {
           console.log('Unable to complete API call: ' + error);
       });

Object Detector List Custom Models (YOLOv5 6.2)

Returns a list of the available models.

POST: https://:32168/v1/vision/custom/list

Platform

All, !Raspberrypi, !Jetson

Parameters

(None)

Response

JSON
{
  "success": (Boolean) // True if successful.
  "models": (String) // An array of strings containing the names of the models installed.
  "moduleId": (String) // The Id of the module that processed this request.
  "moduleName": (String) // The name of the module that processed this request.
  "command": (String) // The command that was sent as part of this request. Can be detect, list, status.
  "statusData": (Object) // [Optional] An object containing (if available) the current module status data.
  "inferenceDevice": (String) // The name of the device handling the inference. eg CPU, GPU, TPU
  "analysisRoundTripMs": (Integer) // The time (ms) for the round trip to the analysis module and back.
  "processedBy": (String) // The hostname of the server that processed this request.
  "timestampUTC": (String) // The timestamp (UTC) of the response.
}

Example

JavaScript
var url = 'https://:32168/v1/vision/custom/list';

fetch(url, { method: "POST" })
      .then(response => {
           if (response.ok) {
               response.json().then(data => {
                   console.log("success: " + data.success)
                   console.log("models: " + data.models)
                   console.log("moduleId: " + data.moduleId)
                   console.log("moduleName: " + data.moduleName)
                   console.log("command: " + data.command)
                   console.log("statusData: " + JSON.stringify(data.statusData))
                   console.log("inferenceDevice: " + data.inferenceDevice)
                   console.log("analysisRoundTripMs: " + data.analysisRoundTripMs)
                   console.log("processedBy: " + data.processedBy)
                   console.log("timestampUTC: " + data.timestampUTC)
               })
           }
       })
       .catch(error => {
           console.log('Unable to complete API call: ' + error);
       });

Object Detector (YOLOv5 RKNN)

Detects multiple objects in an image.

POST: https://:32168/v1/vision/detection

Platform

Orangepi, Radxarock

Parameters

  • image (File): The HTTP file object (image) to be analyzed.

  • min_confidence (Float): The minimum confidence level for an object to be detected, in the range 0.0 to 1.0. Optional. Defaults to 0.3.

Response

JSON
{
  "success": (Boolean) // True if successful.
  "message": (String) // A summary of the inference operation.
  "error": (String) // (Optional) A description of the error if success was false.
  "predictions": (Object[]) // An array of objects with the x_max, x_min, y_max, y_min, label and confidence.
  "inferenceMs": (Integer) // The time (ms) to perform the AI inference.
  "processMs": (Integer) // The time (ms) to process the image (includes inference and image manipulation operations).
  "moduleId": (String) // The Id of the module that processed this request.
  "moduleName": (String) // The name of the module that processed this request.
  "command": (String) // The command that was sent as part of this request. Can be detect, list, status.
  "statusData": (Object) // [Optional] An object containing (if available) the current module status data.
  "inferenceDevice": (String) // The name of the device handling the inference. eg CPU, GPU, TPU
  "analysisRoundTripMs": (Integer) // The time (ms) for the round trip to the analysis module and back.
  "processedBy": (String) // The hostname of the server that processed this request.
  "timestampUTC": (String) // The timestamp (UTC) of the response.
}

Example

JavaScript
// Assume we have a HTML INPUT type=file control with ID=fileChooser
var formData = new FormData();
formData.append('image', fileChooser.files[0]);
formData.append("min_confidence", 0.30);

var url = 'https://:32168/v1/vision/detection';

fetch(url, { method: "POST", body: formData})
      .then(response => {
           if (response.ok) {
               response.json().then(data => {
                   console.log("success: " + data.success)
                   console.log("message: " + data.message)
                   console.log("error: " + data.error)
                   console.log("predictions: " + JSON.stringify(data.predictions))
                   console.log("inferenceMs: " + data.inferenceMs)
                   console.log("processMs: " + data.processMs)
                   console.log("moduleId: " + data.moduleId)
                   console.log("moduleName: " + data.moduleName)
                   console.log("command: " + data.command)
                   console.log("statusData: " + JSON.stringify(data.statusData))
                   console.log("inferenceDevice: " + data.inferenceDevice)
                   console.log("analysisRoundTripMs: " + data.analysisRoundTripMs)
                   console.log("processedBy: " + data.processedBy)
                   console.log("timestampUTC: " + data.timestampUTC)
               })
           }
       })
       .catch(error => {
           console.log('Unable to complete API call: ' + error);
       });

Custom Object Detector (YOLOv5 RKNN)

Detects objects based on YOLO PyTorch models. Models are stored as .rknn files in the /ObjectDetectionYoloRKNN/custom-models directory; to call a specific model, use /vision/custom/model-name, where 'model-name' is the name of the model's .rknn file.

POST: https://:32168/v1/vision/custom

Platform

Orangepi, Radxarock

Parameters

  • image (File): The HTTP file object (image) to be analyzed.

  • min_confidence (Float): The minimum confidence level for an object to be detected, in the range 0.0 to 1.0. The default is 0.4.

Response

JSON
{
  "success": (Boolean) // True if successful.
  "message": (String) // A summary of the inference operation.
  "error": (String) // (Optional) A description of the error if success was false.
  "predictions": (Object[]) // An array of objects with the x_max, x_min, y_max, y_min, label and confidence.
  "inferenceMs": (Integer) // The time (ms) to perform the AI inference.
  "processMs": (Integer) // The time (ms) to process the image (includes inference and image manipulation operations).
  "moduleId": (String) // The Id of the module that processed this request.
  "moduleName": (String) // The name of the module that processed this request.
  "command": (String) // The command that was sent as part of this request. Can be detect, list, status.
  "statusData": (Object) // [Optional] An object containing (if available) the current module status data.
  "inferenceDevice": (String) // The name of the device handling the inference. eg CPU, GPU, TPU
  "analysisRoundTripMs": (Integer) // The time (ms) for the round trip to the analysis module and back.
  "processedBy": (String) // The hostname of the server that processed this request.
  "timestampUTC": (String) // The timestamp (UTC) of the response.
}

Example

JavaScript
// Assume we have a HTML INPUT type=file control with ID=fileChooser
var formData = new FormData();
formData.append('image', fileChooser.files[0]);
formData.append("min_confidence", 0.0);

var url = 'https://:32168/v1/vision/custom';

fetch(url, { method: "POST", body: formData})
      .then(response => {
           if (response.ok) {
               response.json().then(data => {
                   console.log("success: " + data.success)
                   console.log("message: " + data.message)
                   console.log("error: " + data.error)
                   console.log("predictions: " + JSON.stringify(data.predictions))
                   console.log("inferenceMs: " + data.inferenceMs)
                   console.log("processMs: " + data.processMs)
                   console.log("moduleId: " + data.moduleId)
                   console.log("moduleName: " + data.moduleName)
                   console.log("command: " + data.command)
                   console.log("statusData: " + JSON.stringify(data.statusData))
                   console.log("inferenceDevice: " + data.inferenceDevice)
                   console.log("analysisRoundTripMs: " + data.analysisRoundTripMs)
                   console.log("processedBy: " + data.processedBy)
                   console.log("timestampUTC: " + data.timestampUTC)
               })
           }
       })
       .catch(error => {
           console.log('Unable to complete API call: ' + error);
       });

Object Detector List Custom Models (YOLOv5 RKNN)

Returns a list of the available models.

POST: https://:32168/v1/vision/custom/list

Platform

Orangepi, Radxarock

Parameters

(None)

Response

JSON
{
  "success": (Boolean) // True if successful.
  "models": (String) // An array of strings containing the names of the models installed.
  "moduleId": (String) // The Id of the module that processed this request.
  "moduleName": (String) // The name of the module that processed this request.
  "command": (String) // The command that was sent as part of this request. Can be detect, list, status.
  "statusData": (Object) // [Optional] An object containing (if available) the current module status data.
  "inferenceDevice": (String) // The name of the device handling the inference. eg CPU, GPU, TPU
  "analysisRoundTripMs": (Integer) // The time (ms) for the round trip to the analysis module and back.
  "processedBy": (String) // The hostname of the server that processed this request.
  "timestampUTC": (String) // The timestamp (UTC) of the response.
}

Example

JavaScript
var url = 'https://:32168/v1/vision/custom/list';

fetch(url, { method: "POST" })
      .then(response => {
           if (response.ok) {
               response.json().then(data => {
                   console.log("success: " + data.success)
                   console.log("models: " + data.models)
                   console.log("moduleId: " + data.moduleId)
                   console.log("moduleName: " + data.moduleName)
                   console.log("command: " + data.command)
                   console.log("statusData: " + JSON.stringify(data.statusData))
                   console.log("inferenceDevice: " + data.inferenceDevice)
                   console.log("analysisRoundTripMs: " + data.analysisRoundTripMs)
                   console.log("processedBy: " + data.processedBy)
                   console.log("timestampUTC: " + data.timestampUTC)
               })
           }
       })
       .catch(error => {
           console.log('Unable to complete API call: ' + error);
       });

Object Detector (YOLOv8)

Detects multiple objects, of 80 possible types, in an image.

POST: https://:32168/v1/vision/detection

Platform

All

Parameters

  • image (File): The HTTP file object (image) to be analyzed.

  • min_confidence (Float): The minimum confidence level for an object to be detected, in the range 0.0 to 1.0. Optional. Defaults to 0.4.

Response

JSON
{
  "success": (Boolean) // True if successful.
  "message": (String) // A summary of the inference operation.
  "error": (String) // (Optional) A description of the error if success was false.
  "predictions": (Object[]) // An array of objects with the x_max, x_min, y_max, y_min, label and confidence.
  "count": (Integer) // The number of objects found.
  "inferenceMs": (Integer) // The time (ms) to perform the AI inference.
  "processMs": (Integer) // The time (ms) to process the image (includes inference and image manipulation operations).
  "moduleId": (String) // The Id of the module that processed this request.
  "moduleName": (String) // The name of the module that processed this request.
  "command": (String) // The command that was sent as part of this request. Can be detect, list, status.
  "statusData": (Object) // [Optional] An object containing (if available) the current module status data.
  "inferenceDevice": (String) // The name of the device handling the inference. eg CPU, GPU, TPU
  "analysisRoundTripMs": (Integer) // The time (ms) for the round trip to the analysis module and back.
  "processedBy": (String) // The hostname of the server that processed this request.
  "timestampUTC": (String) // The timestamp (UTC) of the response.
}

Example

JavaScript
// Assume we have a HTML INPUT type=file control with ID=fileChooser
var formData = new FormData();
formData.append('image', fileChooser.files[0]);
formData.append("min_confidence", 0.4);

var url = 'https://:32168/v1/vision/detection';

fetch(url, { method: "POST", body: formData})
      .then(response => {
           if (response.ok) {
               response.json().then(data => {
                   console.log("success: " + data.success)
                   console.log("message: " + data.message)
                   console.log("error: " + data.error)
                   console.log("predictions: " + (nothing returned))
                   console.log("count: " + data.count)
                   console.log("inferenceMs: " + data.inferenceMs)
                   console.log("processMs: " + data.processMs)
                   console.log("moduleId: " + data.moduleId)
                   console.log("moduleName: " + data.moduleName)
                   console.log("command: " + data.command)
                   console.log("statusData: " + JSON.stringify(data.statusData))
                   console.log("inferenceDevice: " + data.inferenceDevice)
                   console.log("analysisRoundTripMs: " + data.analysisRoundTripMs)
                   console.log("processedBy: " + data.processedBy)
                   console.log("timestampUTC: " + data.timestampUTC)
               })
           }
       })
       .catch(error => {
           console.log('Unable to complete API call: ' + error);
       });
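
The predictions array can be filtered and measured client-side once the response arrives. Below is a minimal sketch; the helper name summariseDetections and the "person"/0.6 values are purely illustrative and not part of the API.

JavaScript
// Minimal sketch: filter and measure detections from /v1/vision/detection.
// `data` is the parsed JSON response shown in the example above.
function summariseDetections(data, wantedLabel, minConfidence) {
    if (!data.success || !Array.isArray(data.predictions))
        return [];

    return data.predictions
        .filter(p => p.label === wantedLabel && p.confidence >= minConfidence)
        .map(p => ({
            label:      p.label,
            confidence: p.confidence,
            width:      p.x_max - p.x_min,   // bounding box width in pixels
            height:     p.y_max - p.y_min    // bounding box height in pixels
        }));
}

// Example usage inside the response handler above:
// console.log(JSON.stringify(summariseDetections(data, "person", 0.6)));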

Custom Object Detector (YOLOv8)

Detects objects based on YOLO PyTorch models. Models are stored as .pt files in the /ObjectDetectionYOLOv8/assets directory. To call a specific model, use /vision/custom/model-name, where 'model-name' is the name of the model's .pt file (see the sketch after the example below).

POST: https://:32168/v1/vision/custom

Platform

All

Parameters

  • image (File): The HTTP file object (image) to be analyzed.

  • min_confidence (Float): The minimum confidence level for a detected object. In the range 0.0 to 1.0. Defaults to 0.4.

Response

JSON
{
  "success": (Boolean) // True if successful.
  "message": (String) // A summary of the inference operation.
  "error": (String) // (Optional) A description of the error if success was false.
  "predictions": (Object[]) // An array of objects with the x_max, x_min, y_max, y_min, mask, label and confidence.
  "inferenceMs": (Integer) // The time (ms) to perform the AI inference.
  "processMs": (Integer) // The time (ms) to process the image (includes inference and image manipulation operations).
  "moduleId": (String) // The Id of the module that processed this request.
  "moduleName": (String) // The name of the module that processed this request.
  "command": (String) // The command that was sent as part of this request. Can be detect, list, status.
  "statusData": (Object) // [Optional] An object containing (if available) the current module status data.
  "inferenceDevice": (String) // The name of the device handling the inference. eg CPU, GPU, TPU
  "analysisRoundTripMs": (Integer) // The time (ms) for the round trip to the analysis module and back.
  "processedBy": (String) // The hostname of the server that processed this request.
  "timestampUTC": (String) // The timestamp (UTC) of the response.
}

Example

JavaScript
// Assume we have a HTML INPUT type=file control with ID=fileChooser
var formData = new FormData();
formData.append('image', fileChooser.files[0]);
formData.append("min_confidence", 0.0);

var url = 'https://:32168/v1/vision/custom';

fetch(url, { method: "POST", body: formData})
      .then(response => {
           if (response.ok) {
               response.json().then(data => {
                   console.log("success: " + data.success)
                   console.log("message: " + data.message)
                   console.log("error: " + data.error)
                   console.log("predictions: " + (nothing returned))
                   console.log("inferenceMs: " + data.inferenceMs)
                   console.log("processMs: " + data.processMs)
                   console.log("moduleId: " + data.moduleId)
                   console.log("moduleName: " + data.moduleName)
                   console.log("command: " + data.command)
                   console.log("statusData: " + JSON.stringify(data.statusData))
                   console.log("inferenceDevice: " + data.inferenceDevice)
                   console.log("analysisRoundTripMs: " + data.analysisRoundTripMs)
                   console.log("processedBy: " + data.processedBy)
                   console.log("timestampUTC: " + data.timestampUTC)
               })
           }
       })
       .catch(error => {
           console.log('Unable to complete API call: ' + error);
       });
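
As noted in the description above, a specific custom model can be targeted by appending its name to the route. Below is a minimal sketch; the model name 'ipcam-general' is purely illustrative and should be replaced with the name of a .pt file actually present in /ObjectDetectionYOLOv8/assets.

JavaScript
// Minimal sketch: call a specific custom model via /v1/vision/custom/<model-name>.
// 'ipcam-general' is an illustrative model name only.
var modelName = 'ipcam-general';

var formData = new FormData();
formData.append('image', fileChooser.files[0]);
formData.append('min_confidence', 0.4);

var url = 'https://:32168/v1/vision/custom/' + modelName;

fetch(url, { method: "POST", body: formData })
    .then(response => response.ok ? response.json() : Promise.reject(response.status))
    .then(data => console.log("predictions: " + JSON.stringify(data.predictions)))
    .catch(error => console.log('Unable to complete API call: ' + error));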

Object Segmentation (YOLOv8)

Segments multiple objects in an image.

POST: https://:32168/v1/vision/segmentation

Platform

All

Parameters

  • image (File): The HTTP file object (image) to be analyzed.

  • min_confidence (Float): The minimum confidence level for a detected object. In the range 0.0 to 1.0. Optional. Defaults to 0.4

Response

JSON
{
  "success": (Boolean) // True if successful.
  "message": (String) // A summary of the inference operation.
  "error": (String) // (Optional) A description of the error if success was false.
  "predictions": (Object[]) // An array of objects with the x_max, x_min, y_max, y_min, label and confidence.
  "count": (Integer) // The number of objects found.
  "inferenceMs": (Integer) // The time (ms) to perform the AI inference.
  "processMs": (Integer) // The time (ms) to process the image (includes inference and image manipulation operations).
  "moduleId": (String) // The Id of the module that processed this request.
  "moduleName": (String) // The name of the module that processed this request.
  "command": (String) // The command that was sent as part of this request. Can be detect, list, status.
  "statusData": (Object) // [Optional] An object containing (if available) the current module status data.
  "inferenceDevice": (String) // The name of the device handling the inference. eg CPU, GPU, TPU
  "analysisRoundTripMs": (Integer) // The time (ms) for the round trip to the analysis module and back.
  "processedBy": (String) // The hostname of the server that processed this request.
  "timestampUTC": (String) // The timestamp (UTC) of the response.
}

Example

JavaScript
// Assume we have a HTML INPUT type=file control with ID=fileChooser
var formData = new FormData();
formData.append('image', fileChooser.files[0]);
formData.append("min_confidence", 0.4);

var url = 'https://:32168/v1/vision/segmentation';

fetch(url, { method: "POST", body: formData})
      .then(response => {
           if (response.ok) {
               response.json().then(data => {
                   console.log("success: " + data.success)
                   console.log("message: " + data.message)
                   console.log("error: " + data.error)
                   console.log("predictions: " + (nothing returned))
                   console.log("count: " + data.count)
                   console.log("inferenceMs: " + data.inferenceMs)
                   console.log("processMs: " + data.processMs)
                   console.log("moduleId: " + data.moduleId)
                   console.log("moduleName: " + data.moduleName)
                   console.log("command: " + data.command)
                   console.log("statusData: " + JSON.stringify(data.statusData))
                   console.log("inferenceDevice: " + data.inferenceDevice)
                   console.log("analysisRoundTripMs: " + data.analysisRoundTripMs)
                   console.log("processedBy: " + data.processedBy)
                   console.log("timestampUTC: " + data.timestampUTC)
               })
           }
       })
       .catch(error => {
           console.log('Unable to complete API call: ' + error);
       });

Object Detector List Custom Models (YOLOv8)

Returns a list of the available models.

POST: https://:32168/v1/vision/custom/list

Platform

All

Parameters

(None)

Response

JSON
{
  "success": (Boolean) // True if successful.
  "models": (String) // An array of strings containing the names of the models installed.
  "moduleId": (String) // The Id of the module that processed this request.
  "moduleName": (String) // The name of the module that processed this request.
  "command": (String) // The command that was sent as part of this request. Can be detect, list, status.
  "statusData": (Object) // [Optional] An object containing (if available) the current module status data.
  "inferenceDevice": (String) // The name of the device handling the inference. eg CPU, GPU, TPU
  "analysisRoundTripMs": (Integer) // The time (ms) for the round trip to the analysis module and back.
  "processedBy": (String) // The hostname of the server that processed this request.
  "timestampUTC": (String) // The timestamp (UTC) of the response.
}

Example

JavaScript
var formData = new FormData();
var url = 'https://:32168/v1/vision/custom/list';

fetch(url, { method: "POST", body: formData})
      .then(response => {
           if (response.ok) {
               response.json().then(data => {
                   console.log("success: " + data.success)
                   console.log("models: " + data.models)
                   console.log("moduleId: " + data.moduleId)
                   console.log("moduleName: " + data.moduleName)
                   console.log("command: " + data.command)
                   console.log("statusData: " + JSON.stringify(data.statusData))
                   console.log("inferenceDevice: " + data.inferenceDevice)
                   console.log("analysisRoundTripMs: " + data.analysisRoundTripMs)
                   console.log("processedBy: " + data.processedBy)
                   console.log("timestampUTC: " + data.timestampUTC)
               })
           }
       })
       .catch(error => {
           console.log('Unable to complete API call: ' + error);
       });

Optical Character Recognition

Detects text in an image.

POST: https://:32168/v1/vision/ocr

Platform

All, !Windows-Arm64

Parameters

  • upload (File): The image to perform OCR on.

Response

JSON
{
  "success": (Boolean) // True if successful.
  "predictions": (Object[]) // An array of objects with the x_max, x_min, y_max, y_min, label and confidence.
  "inferenceMs": (Integer) // The time (ms) to perform the AI inference.
  "processMs": (Integer) // The time (ms) to process the image (includes inference and image manipulation operations).
  "moduleId": (String) // The Id of the module that processed this request.
  "moduleName": (String) // The name of the module that processed this request.
  "command": (String) // The command that was sent as part of this request. Can be detect, list, status.
  "statusData": (Object) // [Optional] An object containing (if available) the current module status data.
  "inferenceDevice": (String) // The name of the device handling the inference. eg CPU, GPU, TPU
  "analysisRoundTripMs": (Integer) // The time (ms) for the round trip to the analysis module and back.
  "processedBy": (String) // The hostname of the server that processed this request.
  "timestampUTC": (String) // The timestamp (UTC) of the response.
}

Example

JavaScript
// Assume we have a HTML INPUT type=file control with ID=fileChooser
var formData = new FormData();
formData.append('upload', fileChooser.files[0]);

var url = 'https://:32168/v1/vision/ocr';

fetch(url, { method: "POST", body: formData})
      .then(response => {
           if (response.ok) {
               response.json().then(data => {
                   console.log("success: " + data.success)
                   console.log("predictions: " + (nothing returned))
                   console.log("inferenceMs: " + data.inferenceMs)
                   console.log("processMs: " + data.processMs)
                   console.log("moduleId: " + data.moduleId)
                   console.log("moduleName: " + data.moduleName)
                   console.log("command: " + data.command)
                   console.log("statusData: " + JSON.stringify(data.statusData))
                   console.log("inferenceDevice: " + data.inferenceDevice)
                   console.log("analysisRoundTripMs: " + data.analysisRoundTripMs)
                   console.log("processedBy: " + data.processedBy)
                   console.log("timestampUTC: " + data.timestampUTC)
               })
           }
       })
       .catch(error => {
           console.log('Unable to complete API call: ' + error);
       });
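
The recognised text is carried in the predictions array. Below is a minimal sketch that joins the pieces into a single string; it assumes each prediction's label field holds the text of one detected region, which is implied by the response format above but not stated explicitly.

JavaScript
// Minimal sketch: collect the recognised text from an OCR response.
// Assumes each prediction's `label` holds the text of one detected region.
function collectOcrText(data) {
    if (!data.success || !Array.isArray(data.predictions))
        return '';
    return data.predictions.map(p => p.label).join(' ');
}

// Example usage inside the response handler above:
// console.log("text: " + collectOcrText(data));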

Optical Character Recognition, legacy route

Detects text in an image.

POST: https://:32168/v1/image/ocr

Platform

All, !Windows-Arm64

Parameters

  • upload (File): The image to perform OCR on.

Response

JSON
{
  "success": (Boolean) // True if successful.
  "predictions": (Object[]) // An array of objects with the x_max, x_min, y_max, y_min, label and confidence.
  "inferenceMs": (Integer) // The time (ms) to perform the AI inference.
  "processMs": (Integer) // The time (ms) to process the image (includes inference and image manipulation operations).
  "moduleId": (String) // The Id of the module that processed this request.
  "moduleName": (String) // The name of the module that processed this request.
  "command": (String) // The command that was sent as part of this request. Can be detect, list, status.
  "statusData": (Object) // [Optional] An object containing (if available) the current module status data.
  "inferenceDevice": (String) // The name of the device handling the inference. eg CPU, GPU, TPU
  "analysisRoundTripMs": (Integer) // The time (ms) for the round trip to the analysis module and back.
  "processedBy": (String) // The hostname of the server that processed this request.
  "timestampUTC": (String) // The timestamp (UTC) of the response.
}

Example

JavaScript
// Assume we have a HTML INPUT type=file control with ID=fileChooser
var formData = new FormData();
formData.append('upload', fileChooser.files[0]);

var url = 'https://:32168/v1/image/ocr';

fetch(url, { method: "POST", body: formData})
      .then(response => {
           if (response.ok) {
               response.json().then(data => {
                   console.log("success: " + data.success)
                   console.log("predictions: " + (nothing returned))
                   console.log("inferenceMs: " + data.inferenceMs)
                   console.log("processMs: " + data.processMs)
                   console.log("moduleId: " + data.moduleId)
                   console.log("moduleName: " + data.moduleName)
                   console.log("command: " + data.command)
                   console.log("statusData: " + JSON.stringify(data.statusData))
                   console.log("inferenceDevice: " + data.inferenceDevice)
                   console.log("analysisRoundTripMs: " + data.analysisRoundTripMs)
                   console.log("processedBy: " + data.processedBy)
                   console.log("timestampUTC: " + data.timestampUTC)
               })
           }
       })
       .catch(error => {
           console.log('Unable to complete API call: ' + error);
       });

Scene Classifier

Classifies the scene in an image. It can identify 365 different scenes.

POST: https://:32168/v1/vision/scene

Platform

All, !Jetson

Parameters

  • image (File): The HTTP file object (image) to be analyzed.

Response

JSON
{
  "success": (Boolean) // True if successful.
  "label": (Text) // The classification of the scene such as 'conference_room'.
  "confidence": (Float) // The confidence in the classification in the range of 0.0 to 1.0.
  "inferenceMs": (Integer) // The time (ms) to perform the AI inference.
  "processMs": (Integer) // The time (ms) to process the image (includes inference and image manipulation operations).
  "moduleId": (String) // The Id of the module that processed this request.
  "moduleName": (String) // The name of the module that processed this request.
  "command": (String) // The command that was sent as part of this request. Can be detect, list, status.
  "statusData": (Object) // [Optional] An object containing (if available) the current module status data.
  "inferenceDevice": (String) // The name of the device handling the inference. eg CPU, GPU, TPU
  "analysisRoundTripMs": (Integer) // The time (ms) for the round trip to the analysis module and back.
  "processedBy": (String) // The hostname of the server that processed this request.
  "timestampUTC": (String) // The timestamp (UTC) of the response.
}

Example

JavaScript
// Assume we have a HTML INPUT type=file control with ID=fileChooser
var formData = new FormData();
formData.append('image', fileChooser.files[0]);

var url = 'https://:32168/v1/vision/scene';

fetch(url, { method: "POST", body: formData})
      .then(response => {
           if (response.ok) {
               response.json().then(data => {
                   console.log("success: " + data.success)
                   console.log("label: " + data.label)
                   console.log("confidence: " + data.confidence.toFixed(2))
                   console.log("inferenceMs: " + data.inferenceMs)
                   console.log("processMs: " + data.processMs)
                   console.log("moduleId: " + data.moduleId)
                   console.log("moduleName: " + data.moduleName)
                   console.log("command: " + data.command)
                   console.log("statusData: " + JSON.stringify(data.statusData))
                   console.log("inferenceDevice: " + data.inferenceDevice)
                   console.log("analysisRoundTripMs: " + data.analysisRoundTripMs)
                   console.log("processedBy: " + data.processedBy)
                   console.log("timestampUTC: " + data.timestampUTC)
               })
           }
       })
       .catch(error => {
           console.log('Unable to complete API call: ' + error);
       });

Face Recognition

Face Detection

Detects faces in an image and returns the coordinates of the faces.

POST: https://:32168/v1/vision/face

Platform

All, !Jetson

Parameters

  • image (File): The HTTP file object (image) to be analyzed.

  • min_confidence (Float): The minimum confidence level for a detected object. In the range 0.0 to 1.0. Optional. Defaults to 0.4

Response

JSON
{
  "success": (Boolean) // True if successful.
  "message": (String) // A summary of the inference operation.
  "error": (String) // (Optional) A description of the error if success was false.
  "predictions": (Object[]) // An array of objects with the x_max, x_min, y_max, y_min, label and confidence.
  "inferenceMs": (Integer) // The time (ms) to perform the AI inference.
  "processMs": (Integer) // The time (ms) to process the image (includes inference and image manipulation operations).
  "moduleId": (String) // The Id of the module that processed this request.
  "moduleName": (String) // The name of the module that processed this request.
  "command": (String) // The command that was sent as part of this request. Can be detect, list, status.
  "statusData": (Object) // [Optional] An object containing (if available) the current module status data.
  "inferenceDevice": (String) // The name of the device handling the inference. eg CPU, GPU, TPU
  "analysisRoundTripMs": (Integer) // The time (ms) for the round trip to the analysis module and back.
  "processedBy": (String) // The hostname of the server that processed this request.
  "timestampUTC": (String) // The timestamp (UTC) of the response.
}

Example

JavaScript
// Assume we have a HTML INPUT type=file control with ID=fileChooser
var formData = new FormData();
formData.append('image', fileChooser.files[0]);
formData.append("min_confidence", 0.4);

var url = 'https://:32168/v1/vision/face';

fetch(url, { method: "POST", body: formData})
      .then(response => {
           if (response.ok) {
               response.json().then(data => {
                   console.log("success: " + data.success)
                   console.log("message: " + data.message)
                   console.log("error: " + data.error)
                   console.log("predictions: " + JSON.stringify(data.predictions))
                   console.log("inferenceMs: " + data.inferenceMs)
                   console.log("processMs: " + data.processMs)
                   console.log("moduleId: " + data.moduleId)
                   console.log("moduleName: " + data.moduleName)
                   console.log("command: " + data.command)
                   console.log("statusData: " + JSON.stringify(data.statusData))
                   console.log("inferenceDevice: " + data.inferenceDevice)
                   console.log("analysisRoundTripMs: " + data.analysisRoundTripMs)
                   console.log("processedBy: " + data.processedBy)
                   console.log("timestampUTC: " + data.timestampUTC)
               })
           }
       })
       .catch(error => {
           console.log('Unable to complete API call: ' + error);
       });

Face Match

Compares two faces in two images and returns a value indicating how similar the faces are.

POST: https://:32168/v1/vision/face/match

Platform

All, !Jetson

Parameters

  • image1 (File): The first HTTP file object (image) to be analyzed.

  • image2 (File): The second HTTP file object (image) to be analyzed.

Response

JSON
{
  "success": (Boolean) // True if successful.
  "similarity": (Float) // How similar the two images are, in the range of 0.0 to 1.0.
  "inferenceMs": (Integer) // The time (ms) to perform the AI inference.
  "processMs": (Integer) // The time (ms) to process the image (includes inference and image manipulation operations).
  "moduleId": (String) // The Id of the module that processed this request.
  "moduleName": (String) // The name of the module that processed this request.
  "command": (String) // The command that was sent as part of this request. Can be detect, list, status.
  "statusData": (Object) // [Optional] An object containing (if available) the current module status data.
  "inferenceDevice": (String) // The name of the device handling the inference. eg CPU, GPU, TPU
  "analysisRoundTripMs": (Integer) // The time (ms) for the round trip to the analysis module and back.
  "processedBy": (String) // The hostname of the server that processed this request.
  "timestampUTC": (String) // The timestamp (UTC) of the response.
}

Example

JavaScript
// Assume we have a HTML INPUT type=file control with ID=fileChooser
var formData = new FormData();
formData.append('image1', fileChooser.files[0]);
formData.append('image2', fileChooser.files[1]);

var url = 'https://:32168/v1/vision/face/match';

fetch(url, { method: "POST", body: formData})
      .then(response => {
           if (response.ok) {
               response.json().then(data => {
                   console.log("success: " + data.success)
                   console.log("similarity: " + data.similarity.toFixed(2))
                   console.log("inferenceMs: " + data.inferenceMs)
                   console.log("processMs: " + data.processMs)
                   console.log("moduleId: " + data.moduleId)
                   console.log("moduleName: " + data.moduleName)
                   console.log("command: " + data.command)
                   console.log("statusData: " + JSON.stringify(data.statusData))
                   console.log("inferenceDevice: " + data.inferenceDevice)
                   console.log("analysisRoundTripMs: " + data.analysisRoundTripMs)
                   console.log("processedBy: " + data.processedBy)
                   console.log("timestampUTC: " + data.timestampUTC)
               })
           }
       })
       .catch(error => {
           console.log('Unable to complete API call: ' + error);
       });

List Registered Faces

Lists the users who have registered images in the face recognition database.

POST: https://:32168/v1/vision/face/list

Platform

All, !Jetson

Parameters

(None)

Response

JSON
{
  "success": (Boolean) // True if successful.
  "faces": (Object) // An array of the userid strings for users with registered images.
  "moduleId": (String) // The Id of the module that processed this request.
  "moduleName": (String) // The name of the module that processed this request.
  "command": (String) // The command that was sent as part of this request. Can be detect, list, status.
  "statusData": (Object) // [Optional] An object containing (if available) the current module status data.
  "inferenceDevice": (String) // The name of the device handling the inference. eg CPU, GPU, TPU
  "analysisRoundTripMs": (Integer) // The time (ms) for the round trip to the analysis module and back.
  "processedBy": (String) // The hostname of the server that processed this request.
  "timestampUTC": (String) // The timestamp (UTC) of the response.
}

Example

JavaScript
var formData = new FormData();
var url = 'https://:32168/v1/vision/face/list';

fetch(url, { method: "POST", body: formData})
      .then(response => {
           if (response.ok) {
               response.json().then(data => {
                   console.log("success: " + data.success)
                   console.log("faces: " + JSON.stringify(data.faces))
                   console.log("moduleId: " + data.moduleId)
                   console.log("moduleName: " + data.moduleName)
                   console.log("command: " + data.command)
                   console.log("statusData: " + JSON.stringify(data.statusData))
                   console.log("inferenceDevice: " + data.inferenceDevice)
                   console.log("analysisRoundTripMs: " + data.analysisRoundTripMs)
                   console.log("processedBy: " + data.processedBy)
                   console.log("timestampUTC: " + data.timestampUTC)
               })
           }
       })
       .catch(error => {
           console.log('Unable to complete API call: ' + error);
       });

Register Face

Registers one or more images for a user for recognition. This trains the face recognition model and allows face recognition to return the userId when given an image that may or may not contain that user's face. (See the sketch after the example below for registering several images at once.)

POST: https://:32168/v1/vision/face/register

Platform

All, !Jetson

Parameters

  • imageN (File): One or more HTTP file objects (images) to register.

  • userid (Text): The identifying string for the user.

Response

JSON
{
  "success": (Boolean) // True if successful.
  "Message": (Text) // face added
  "inferenceMs": (Integer) // The time (ms) to perform the AI inference.
  "processMs": (Integer) // The time (ms) to process the image (includes inference and image manipulation operations).
  "moduleId": (String) // The Id of the module that processed this request.
  "moduleName": (String) // The name of the module that processed this request.
  "command": (String) // The command that was sent as part of this request. Can be detect, list, status.
  "statusData": (Object) // [Optional] An object containing (if available) the current module status data.
  "inferenceDevice": (String) // The name of the device handling the inference. eg CPU, GPU, TPU
  "analysisRoundTripMs": (Integer) // The time (ms) for the round trip to the analysis module and back.
  "processedBy": (String) // The hostname of the server that processed this request.
  "timestampUTC": (String) // The timestamp (UTC) of the response.
}

Example

JavaScript
// Assume we have a HTML INPUT type=file control with ID=fileChooser
var formData = new FormData();
formData.append('imageN', fileChooser.files[0]);
formData.append("userid", '');

var url = 'https://:32168/v1/vision/face/register';

fetch(url, { method: "POST", body: formData})
      .then(response => {
           if (response.ok) {
               response.json().then(data => {
                   console.log("success: " + data.success)
                   console.log("Message: " + data.Message)
                   console.log("inferenceMs: " + data.inferenceMs)
                   console.log("processMs: " + data.processMs)
                   console.log("moduleId: " + data.moduleId)
                   console.log("moduleName: " + data.moduleName)
                   console.log("command: " + data.command)
                   console.log("statusData: " + JSON.stringify(data.statusData))
                   console.log("inferenceDevice: " + data.inferenceDevice)
                   console.log("analysisRoundTripMs: " + data.analysisRoundTripMs)
                   console.log("processedBy: " + data.processedBy)
                   console.log("timestampUTC: " + data.timestampUTC)
               })
           }
       })
       .catch(error => {
           console.log('Unable to complete API call: ' + error);
       });
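
The imageN parameter indicates that several images can be supplied in one registration call. Below is a minimal sketch that sends every file selected in the file input; the field names image1, image2, … are inferred from the parameter name, and the userid value is illustrative only.

JavaScript
// Minimal sketch: register several images for one user in a single call.
// Field names image1, image2, ... are inferred from the imageN parameter above.
var formData = new FormData();
for (var i = 0; i < fileChooser.files.length; i++)
    formData.append('image' + (i + 1), fileChooser.files[i]);
formData.append('userid', 'jane_doe');   // illustrative user id

var url = 'https://:32168/v1/vision/face/register';

fetch(url, { method: "POST", body: formData })
    .then(response => response.json())
    .then(data => console.log("success: " + data.success + ", Message: " + data.Message))
    .catch(error => console.log('Unable to complete API call: ' + error));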

Delete Registered Face

Removes a user id and their images from the face registration database.

POST: https://:32168/v1/vision/face/delete

Platform

All, !Jetson

Parameters

  • userid (Text): The identifying string for the user.

Response

JSON
{
  "success": (Boolean) // True if successful.
  "moduleId": (String) // The Id of the module that processed this request.
  "moduleName": (String) // The name of the module that processed this request.
  "command": (String) // The command that was sent as part of this request. Can be detect, list, status.
  "statusData": (Object) // [Optional] An object containing (if available) the current module status data.
  "inferenceDevice": (String) // The name of the device handling the inference. eg CPU, GPU, TPU
  "analysisRoundTripMs": (Integer) // The time (ms) for the round trip to the analysis module and back.
  "processedBy": (String) // The hostname of the server that processed this request.
  "timestampUTC": (String) // The timestamp (UTC) of the response.
}

Example

JavaScript
var formData = new FormData();
formData.append("userid", '');

var url = 'https://:32168/v1/vision/face/delete';

fetch(url, { method: "POST", body: formData})
      .then(response => {
           if (response.ok) {
               response.json().then(data => {
                   console.log("success: " + data.success)
                   console.log("moduleId: " + data.moduleId)
                   console.log("moduleName: " + data.moduleName)
                   console.log("command: " + data.command)
                   console.log("statusData: " + JSON.stringify(data.statusData))
                   console.log("inferenceDevice: " + data.inferenceDevice)
                   console.log("analysisRoundTripMs: " + data.analysisRoundTripMs)
                   console.log("processedBy: " + data.processedBy)
                   console.log("timestampUTC: " + data.timestampUTC)
               })
           }
       })
       .catch(error => {
           console.log('Unable to complete API call: ' + error);
       });

Face Recognition

Recognizes all faces in an image and returns the user id and coordinates of each face in the image. If a new (unregistered) face is detected, no user id is returned for that face.

POST: https://:32168/v1/vision/face/recognize

Platform

All, !Jetson

Parameters

  • image (File): The HTTP file object (image) to be analyzed.

  • min_confidence (Float): The minimum confidence level for a detected object. In the range 0.0 to 1.0. Optional. Defaults to 0.4

Response

JSON
{
  "success": (Boolean) // True if successful.
  "predictions": (Object[]) // An array of objects with the userid, x_max, x_min, y_max, y_min, label and confidence.
  "inferenceMs": (Integer) // The time (ms) to perform the AI inference.
  "processMs": (Integer) // The time (ms) to process the image (includes inference and image manipulation operations).
  "moduleId": (String) // The Id of the module that processed this request.
  "moduleName": (String) // The name of the module that processed this request.
  "command": (String) // The command that was sent as part of this request. Can be detect, list, status.
  "statusData": (Object) // [Optional] An object containing (if available) the current module status data.
  "inferenceDevice": (String) // The name of the device handling the inference. eg CPU, GPU, TPU
  "analysisRoundTripMs": (Integer) // The time (ms) for the round trip to the analysis module and back.
  "processedBy": (String) // The hostname of the server that processed this request.
  "timestampUTC": (String) // The timestamp (UTC) of the response.
}

Example

JavaScript
// Assume we have a HTML INPUT type=file control with ID=fileChooser
var formData = new FormData();
formData.append('image', fileChooser.files[0]);
formData.append("min_confidence", 0.4);

var url = 'https://:32168/v1/vision/face/recognize';

fetch(url, { method: "POST", body: formData})
      .then(response => {
           if (response.ok) {
               response.json().then(data => {
                   console.log("success: " + data.success)
                   console.log("predictions: " + JSON.stringify(data.predictions))
                   console.log("inferenceMs: " + data.inferenceMs)
                   console.log("processMs: " + data.processMs)
                   console.log("moduleId: " + data.moduleId)
                   console.log("moduleName: " + data.moduleName)
                   console.log("command: " + data.command)
                   console.log("statusData: " + JSON.stringify(data.statusData))
                   console.log("inferenceDevice: " + data.inferenceDevice)
                   console.log("analysisRoundTripMs: " + data.analysisRoundTripMs)
                   console.log("processedBy: " + data.processedBy)
                   console.log("timestampUTC: " + data.timestampUTC)
               })
           }
       })
       .catch(error => {
           console.log('Unable to complete API call: ' + error);
       });
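
Once the response arrives, the predictions can be reduced to a list of recognised users. Below is a minimal sketch (the helper name and the 0.6 threshold are illustrative); per the description above, predictions without a userid are treated as unknown faces.

JavaScript
// Minimal sketch: report who was recognised by /v1/vision/face/recognize.
// `data` is the parsed JSON response shown in the example above.
function listRecognisedUsers(data, minConfidence) {
    if (!data.success || !Array.isArray(data.predictions))
        return [];
    return data.predictions
        .filter(p => p.confidence >= minConfidence)
        .map(p => p.userid || "unknown");
}

// Example usage inside the response handler above:
// console.log("users: " + JSON.stringify(listRecognisedUsers(data, 0.6)));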

Generative AI

LlamaChat

Answers simple wiki-based questions using a Llama LLM.

POST: https://:32168/v1/text/chat

Platform

All, !Windows-Arm64, !Raspberrypi, !Orangepi, !Radxarock, !Jetson

Parameters

  • prompt (Text): The prompt to generate text from

  • system_prompt (Text): The description of the assistant

  • max_tokens (Integer): The maximum number of tokens to generate

  • temperature (Float): The temperature to use for sampling

Response

JSON
{
  "success": (Boolean) // True if successful.
  "reply": (Text) // The reply from the model.
  "inferenceMs": (Integer) // The time (ms) to perform the AI inference.
  "processMs": (Integer) // The time (ms) to process the image (includes inference and image manipulation operations).
  "moduleId": (String) // The Id of the module that processed this request.
  "moduleName": (String) // The name of the module that processed this request.
  "command": (String) // The command that was sent as part of this request. Can be detect, list, status.
  "statusData": (Object) // [Optional] An object containing (if available) the current module status data.
  "inferenceDevice": (String) // The name of the device handling the inference. eg CPU, GPU, TPU
  "analysisRoundTripMs": (Integer) // The time (ms) for the round trip to the analysis module and back.
  "processedBy": (String) // The hostname of the server that processed this request.
  "timestampUTC": (String) // The timestamp (UTC) of the response.
}

Example

JavaScript
var formData = new FormData();
formData.append("prompt", '');
formData.append("system_prompt", '');
formData.append("max_tokens", 0);
formData.append("temperature", 0.0);

var url = 'https://:32168/v1/text/chat';

fetch(url, { method: "POST", body: formData})
      .then(response => {
           if (response.ok) {
               response.json().then(data => {
                   console.log("success: " + data.success)
                   console.log("reply: " + data.reply)
                   console.log("inferenceMs: " + data.inferenceMs)
                   console.log("processMs: " + data.processMs)
                   console.log("moduleId: " + data.moduleId)
                   console.log("moduleName: " + data.moduleName)
                   console.log("command: " + data.command)
                   console.log("statusData: " + JSON.stringify(data.statusData))
                   console.log("inferenceDevice: " + data.inferenceDevice)
                   console.log("analysisRoundTripMs: " + data.analysisRoundTripMs)
                   console.log("processedBy: " + data.processedBy)
                   console.log("timestampUTC: " + data.timestampUTC)
               })
           }
       })
       .catch(error => {
           console.log('Unable to complete API call: ' + error);
       });
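
The example above sends empty values. Below is a minimal sketch with concrete, purely illustrative values for the prompt, system prompt, token limit and temperature.

JavaScript
// Minimal sketch: a concrete LlamaChat request. All parameter values are illustrative.
var formData = new FormData();
formData.append("prompt", "In one sentence, what is machine learning?");
formData.append("system_prompt", "You are a concise, helpful assistant.");
formData.append("max_tokens", 256);
formData.append("temperature", 0.4);

var url = 'https://:32168/v1/text/chat';

fetch(url, { method: "POST", body: formData })
    .then(response => response.json())
    .then(data => console.log("reply: " + data.reply))
    .catch(error => console.log('Unable to complete API call: ' + error));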

Text to Image Stable Diffusion

Creates an image from an input prompt.

POST: https://:32168/v1/text2image/create

Platform

Windows, macOS, Linux

Parameters

  • prompt (Text): The prompt Stable Diffusion uses to create the image.

  • negative_prompt (Text): The negative prompt Stable Diffusion uses to create the image.

  • num_images (Integer): The number of images to generate. Optional. Defaults to 1

  • num_steps (Integer): The number of inference steps to run. Optional. Defaults to 20

  • seed (Integer): The seed to use for the diffusion random number generator. Defaults to a random seed.

  • width (Integer): The width of the image to create. Must be a multiple of 8. Optional. Defaults to 512

  • height (Integer): The height of the image to create. Must be a multiple of 8. Optional. Defaults to 512

  • guidance_scale (Float): How closely the image generation follows the prompt (0.0 - 20.0). Optional. Defaults to 7.0

Response

JSON
{
  "success": (Boolean) // True if successful.
  "images": (File[]) // An array of 1-4 images generated.
  "moduleId": (String) // The Id of the module that processed this request.
  "moduleName": (String) // The name of the module that processed this request.
  "command": (String) // The command that was sent as part of this request. Can be detect, list, status.
  "statusData": (Object) // [Optional] An object containing (if available) the current module status data.
  "inferenceDevice": (String) // The name of the device handling the inference. eg CPU, GPU, TPU
  "analysisRoundTripMs": (Integer) // The time (ms) for the round trip to the analysis module and back.
  "processedBy": (String) // The hostname of the server that processed this request.
  "timestampUTC": (String) // The timestamp (UTC) of the response.
}

Example

JavaScript
var formData = new FormData();
formData.append("prompt", '');
formData.append("negative_prompt", '');
formData.append("num_images", 1);
formData.append("num_steps", 20);
formData.append("seed", 0);
formData.append("width", 512);
formData.append("height", 512);
formData.append("guidance_scale", 7.0);

var url = 'https://:32168/v1/text2image/create';

fetch(url, { method: "POST", body: formData})
      .then(response => {
           if (response.ok) {
               response.json().then(data => {
                   console.log("success: " + data.success)
                   console.log("images: " + (nothing returned))
                   console.log("moduleId: " + data.moduleId)
                   console.log("moduleName: " + data.moduleName)
                   console.log("command: " + data.command)
                   console.log("statusData: " + JSON.stringify(data.statusData))
                   console.log("inferenceDevice: " + data.inferenceDevice)
                   console.log("analysisRoundTripMs: " + data.analysisRoundTripMs)
                   console.log("processedBy: " + data.processedBy)
                   console.log("timestampUTC: " + data.timestampUTC)
               })
           }
       })
       .catch(error => {
           console.log('Unable to complete API call: ' + error);
       });

Image Processing

Background Remover

Removes the background from the main subject of an image.

POST: https://:32168/v1/image/removebackground

Platform

All, !Linux, !Raspberrypi, !Orangepi, !Radxarock, !Jetson

Parameters

  • image (File): The image to have its background removed.

  • use_alphamatting (Boolean): Whether to use alpha matting. Optional. Defaults to false

Response

JSON
{
  "success": (Boolean) // True if successful.
  "imageBase64": (Base64ImageData) // The base64 encoded image that has had its background removed.
  "inferenceMs": (Integer) // The time (ms) to perform the AI inference.
  "processMs": (Integer) // The time (ms) to process the image (includes inference and image manipulation operations).
  "moduleId": (String) // The Id of the module that processed this request.
  "moduleName": (String) // The name of the module that processed this request.
  "command": (String) // The command that was sent as part of this request. Can be detect, list, status.
  "statusData": (Object) // [Optional] An object containing (if available) the current module status data.
  "inferenceDevice": (String) // The name of the device handling the inference. eg CPU, GPU, TPU
  "analysisRoundTripMs": (Integer) // The time (ms) for the round trip to the analysis module and back.
  "processedBy": (String) // The hostname of the server that processed this request.
  "timestampUTC": (String) // The timestamp (UTC) of the response.
}

Example

JavaScript
// Assume we have a HTML INPUT type=file control with ID=fileChooser
var formData = new FormData();
formData.append('image', fileChooser.files[0]);
formData.append("use_alphamatting", false);

var url = 'https://:32168/v1/image/removebackground';

fetch(url, { method: "POST", body: formData})
      .then(response => {
           if (response.ok) {
               response.json().then(data => {
                   console.log("success: " + data.success)
                   // Assume we have an IMG tag named img1
                   img1.src = "data:image/png;base64," + data.imageBase64;
                   console.log("inferenceMs: " + data.inferenceMs)
                   console.log("processMs: " + data.processMs)
                   console.log("moduleId: " + data.moduleId)
                   console.log("moduleName: " + data.moduleName)
                   console.log("command: " + data.command)
                   console.log("statusData: " + JSON.stringify(data.statusData))
                   console.log("inferenceDevice: " + data.inferenceDevice)
                   console.log("analysisRoundTripMs: " + data.analysisRoundTripMs)
                   console.log("processedBy: " + data.processedBy)
                   console.log("timestampUTC: " + data.timestampUTC)
               })
           }
       })
       .catch(error => {
           console.log('Unable to complete API call: ' + error);
       });
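
Beyond assigning the result to an IMG tag, the returned imageBase64 data can be offered as a file download using standard browser APIs. Below is a minimal sketch; the helper name and file name are illustrative.

JavaScript
// Minimal sketch: save the background-removed image as a PNG download.
// `data.imageBase64` is the field documented in the response above.
function downloadResult(data, filename) {
    var link = document.createElement('a');
    link.href = 'data:image/png;base64,' + data.imageBase64;
    link.download = filename || 'no-background.png';
    document.body.appendChild(link);
    link.click();
    document.body.removeChild(link);
}

// Example usage inside the response handler above:
// downloadResult(data, 'subject.png');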

Cartooniser

Converts a photo into an anime-style cartoon.

POST: https://:32168/v1/image/cartoonize

Platform

All, !Raspberrypi, !Orangepi, !Radxarock, !Jetson

Parameters

  • image (File): The image to convert.

  • model (String): The name of the model to use

Response

JSON
{
  "success": (Boolean) // True if successful.
  "imageBase64": (Base64ImageData) // The base64 encoded image.
  "inferenceMs": (Integer) // The time (ms) to perform the AI inference.
  "processMs": (Integer) // The time (ms) to process the image (includes inference and image manipulation operations).
  "moduleId": (String) // The Id of the module that processed this request.
  "moduleName": (String) // The name of the module that processed this request.
  "command": (String) // The command that was sent as part of this request. Can be detect, list, status.
  "statusData": (Object) // [Optional] An object containing (if available) the current module status data.
  "inferenceDevice": (String) // The name of the device handling the inference. eg CPU, GPU, TPU
  "analysisRoundTripMs": (Integer) // The time (ms) for the round trip to the analysis module and back.
  "processedBy": (String) // The hostname of the server that processed this request.
  "timestampUTC": (String) // The timestamp (UTC) of the response.
}

Example

JavaScript
// Assume we have a HTML INPUT type=file control with ID=fileChooser
var formData = new FormData();
formData.append('image', fileChooser.files[0]);
formData.append("model", );

var url = 'https://:32168/v1/image/cartoonize';

fetch(url, { method: "POST", body: formData})
      .then(response => {
           if (response.ok) {
               response.json().then(data => {
                   console.log("success: " + data.success)
                   // Assume we have an IMG tag named img1
                   img1.src = "data:image/png;base64," + data.imageBase64;
                   console.log("inferenceMs: " + data.inferenceMs)
                   console.log("processMs: " + data.processMs)
                   console.log("moduleId: " + data.moduleId)
                   console.log("moduleName: " + data.moduleName)
                   console.log("command: " + data.command)
                   console.log("statusData: " + JSON.stringify(data.statusData))
                   console.log("inferenceDevice: " + data.inferenceDevice)
                   console.log("analysisRoundTripMs: " + data.analysisRoundTripMs)
                   console.log("processedBy: " + data.processedBy)
                   console.log("timestampUTC: " + data.timestampUTC)
               })
           }
       })
       .catch(error => {
           console.log('Unable to complete API call: ' + error);
       });

Portrait Filter

Blurs the background behind the main subject of an image.

POST: https://:32168/v1/image/portraitfilter

Platform

Windows, Windows-Arm64

Parameters

  • image (File): The image to be filtered.

  • strength (Float): How much to blur the background (0.0 - 1.0). Optional. Defaults to 0.5

Response

JSON
{
  "success": (Boolean) // True if successful.
  "filtered_image": (Base64ImageData) // The base64 encoded image that has had its background blurred.
  "inferenceMs": (Integer) // The time (ms) to perform the AI inference.
  "processMs": (Integer) // The time (ms) to process the image (includes inference and image manipulation operations).
  "moduleId": (String) // The Id of the module that processed this request.
  "moduleName": (String) // The name of the module that processed this request.
  "command": (String) // The command that was sent as part of this request. Can be detect, list, status.
  "statusData": (Object) // [Optional] An object containing (if available) the current module status data.
  "inferenceDevice": (String) // The name of the device handling the inference. eg CPU, GPU, TPU
  "analysisRoundTripMs": (Integer) // The time (ms) for the round trip to the analysis module and back.
  "processedBy": (String) // The hostname of the server that processed this request.
  "timestampUTC": (String) // The timestamp (UTC) of the response.
}

Example

JavaScript
// Assume we have a HTML INPUT type=file control with ID=fileChooser
var formData = new FormData();
formData.append('image', fileChooser.files[0]);
formData.append("strength", 0.5);

var url = 'https://:32168/v1/image/portraitfilter';

fetch(url, { method: "POST", body: formData})
      .then(response => {
           if (response.ok) {
               response.json().then(data => {
                   console.log("success: " + data.success)
                   // Assume we have an IMG tag named img1
                   img1.src = "data:image/png;base64," + data.filtered_image;
                   console.log("inferenceMs: " + data.inferenceMs)
                   console.log("processMs: " + data.processMs)
                   console.log("moduleId: " + data.moduleId)
                   console.log("moduleName: " + data.moduleName)
                   console.log("command: " + data.command)
                   console.log("statusData: " + JSON.stringify(data.statusData))
                   console.log("inferenceDevice: " + data.inferenceDevice)
                   console.log("analysisRoundTripMs: " + data.analysisRoundTripMs)
                   console.log("processedBy: " + data.processedBy)
                   console.log("timestampUTC: " + data.timestampUTC)
               })
           }
       })
       .catch(error => {
           console.log('Unable to complete API call: ' + error);
       });

Super Resolution

Increases the resolution of an image using AI, ensuring no blurring is introduced.

POST: https://:32168/v1/image/superresolution

Platform

All

Parameters

  • image (File): The image to have its resolution increased.

Response

JSON
{
  "success": (Boolean) // True if successful.
  "imageBase64": (Base64ImageData) // The base64 encoded image that has had its resolution increased.
  "inferenceMs": (Integer) // The time (ms) to perform the AI inference.
  "processMs": (Integer) // The time (ms) to process the image (includes inference and image manipulation operations).
  "moduleId": (String) // The Id of the module that processed this request.
  "moduleName": (String) // The name of the module that processed this request.
  "command": (String) // The command that was sent as part of this request. Can be detect, list, status.
  "statusData": (Object) // [Optional] An object containing (if available) the current module status data.
  "inferenceDevice": (String) // The name of the device handling the inference. eg CPU, GPU, TPU
  "analysisRoundTripMs": (Integer) // The time (ms) for the round trip to the analysis module and back.
  "processedBy": (String) // The hostname of the server that processed this request.
  "timestampUTC": (String) // The timestamp (UTC) of the response.
}

Example

JavaScript
// Assume we have a HTML INPUT type=file control with ID=fileChooser
var formData = new FormData();
formData.append('image', fileChooser.files[0]);

var url = 'https://:32168/v1/image/superresolution';

fetch(url, { method: "POST", body: formData})
      .then(response => {
           if (response.ok) {
               response.json().then(data => {
                   console.log("success: " + data.success)
                   // Assume we have an IMG tag named img1
                   img1.src = "data:image/png;base64," + data.imageBase64;
                   console.log("inferenceMs: " + data.inferenceMs)
                   console.log("processMs: " + data.processMs)
                   console.log("moduleId: " + data.moduleId)
                   console.log("moduleName: " + data.moduleName)
                   console.log("command: " + data.command)
                   console.log("statusData: " + JSON.stringify(data.statusData))
                   console.log("inferenceDevice: " + data.inferenceDevice)
                   console.log("analysisRoundTripMs: " + data.analysisRoundTripMs)
                   console.log("processedBy: " + data.processedBy)
                   console.log("timestampUTC: " + data.timestampUTC)
               })
           }
       })
       .catch(error => {
           console.log('Unable to complete API call: ' + error);
       });

Natural Language

Sentiment Analysis

Determines whether the supplied text has a positive or negative sentiment.

POST: https://:32168/v1/text/sentiment

Platform

Windows, macOS

Parameters

  • text (Text): The text to be analyzed.

Response

JSON
{
  "success": (Boolean) // True if successful.
  "is_positive": (Boolean) // Whether the input text had a positive sentiment.
  "positive_probability": (Float) // The probability the input text has a positive sentiment.
  "inferenceMs": (Integer) // The time (ms) to perform the AI inference.
  "processMs": (Integer) // The time (ms) to process the image (includes inference and text manipulation operations).
  "moduleId": (String) // The Id of the module that processed this request.
  "moduleName": (String) // The name of the module that processed this request.
  "command": (String) // The command that was sent as part of this request. Can be detect, list, status.
  "statusData": (Object) // [Optional] An object containing (if available) the current module status data.
  "inferenceDevice": (String) // The name of the device handling the inference. eg CPU, GPU, TPU
  "analysisRoundTripMs": (Integer) // The time (ms) for the round trip to the analysis module and back.
  "processedBy": (String) // The hostname of the server that processed this request.
  "timestampUTC": (String) // The timestamp (UTC) of the response.
}

Example

JavaScript
var formData = new FormData();
formData.append("text", '');

var url = 'https://:32168/v1/text/sentiment';

fetch(url, { method: "POST", body: formData})
      .then(response => {
           if (response.ok) {
               response.json().then(data => {
                   console.log("success: " + data.success)
                   console.log("is_positive: " + data.is_positive)
                   console.log("positive_probability: " + data.positive_probability.toFixed(2))
                   console.log("inferenceMs: " + data.inferenceMs)
                   console.log("processMs: " + data.processMs)
                   console.log("moduleId: " + data.moduleId)
                   console.log("moduleName: " + data.moduleName)
                   console.log("command: " + data.command)
                   console.log("statusData: " + JSON.stringify(data.statusData))
                   console.log("inferenceDevice: " + data.inferenceDevice)
                   console.log("analysisRoundTripMs: " + data.analysisRoundTripMs)
                   console.log("processedBy: " + data.processedBy)
                   console.log("timestampUTC: " + data.timestampUTC)
               })
           }
       })
       .catch(error => {
           console.log('Unable to complete API call: ' + error);
       });
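
Below is a minimal sketch that wraps the call in a reusable helper returning the documented fields; the helper name is illustrative.

JavaScript
// Minimal sketch: reusable helper around /v1/text/sentiment.
function getSentiment(text) {
    var formData = new FormData();
    formData.append("text", text);

    return fetch('https://:32168/v1/text/sentiment', { method: "POST", body: formData })
        .then(response => response.json())
        .then(data => ({
            positive:    data.is_positive,
            probability: data.positive_probability
        }));
}

// Example usage:
// getSentiment("What a wonderful day!").then(result => console.log(JSON.stringify(result)));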

Text Summary

Summarizes content by selecting the given number of sentences that best represent it.

POST: https://:32168/v1/text/summarize

Platform

All

Parameters

  • text (Text): The text to summarize

  • num_sentences (Integer): The number of sentences to produce.

Response

JSON
{
  "success": (Boolean) // True if successful.
  "summary": (Text) // The summarized text.
  "inferenceMs": (Integer) // The time (ms) to perform the AI inference.
  "processMs": (Integer) // The time (ms) to process the image (includes inference and image manipulation operations).
  "moduleId": (String) // The Id of the module that processed this request.
  "moduleName": (String) // The name of the module that processed this request.
  "command": (String) // The command that was sent as part of this request. Can be detect, list, status.
  "statusData": (Object) // [Optional] An object containing (if available) the current module status data.
  "inferenceDevice": (String) // The name of the device handling the inference. eg CPU, GPU, TPU
  "analysisRoundTripMs": (Integer) // The time (ms) for the round trip to the analysis module and back.
  "processedBy": (String) // The hostname of the server that processed this request.
  "timestampUTC": (String) // The timestamp (UTC) of the response.
}

Example

JavaScript
var formData = new FormData();
formData.append("text", '');
formData.append("num_sentences", 0);

var url = 'https://:32168/v1/text/summarize';

fetch(url, { method: "POST", body: formData})
      .then(response => {
           if (response.ok) {
               response.json().then(data => {
                   console.log("success: " + data.success)
                   console.log("summary: " + data.summary)
                   console.log("inferenceMs: " + data.inferenceMs)
                   console.log("processMs: " + data.processMs)
                   console.log("moduleId: " + data.moduleId)
                   console.log("moduleName: " + data.moduleName)
                   console.log("command: " + data.command)
                   console.log("statusData: " + JSON.stringify(data.statusData))
                   console.log("inferenceDevice: " + data.inferenceDevice)
                   console.log("analysisRoundTripMs: " + data.analysisRoundTripMs)
                   console.log("processedBy: " + data.processedBy)
                   console.log("timestampUTC: " + data.timestampUTC)
               })
           }
       })
       .catch(error => {
           console.log('Unable to complete API call: ' + error);
       });
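
Below is a minimal sketch that requests a three-sentence summary of a longer passage; the source of the text (an element with id "article") and the sentence count are illustrative.

JavaScript
// Minimal sketch: summarise the text of a page element into three sentences.
var longText = document.getElementById('article').innerText;   // illustrative text source

var formData = new FormData();
formData.append("text", longText);
formData.append("num_sentences", 3);

var url = 'https://:32168/v1/text/summarize';

fetch(url, { method: "POST", body: formData })
    .then(response => response.json())
    .then(data => console.log("summary: " + data.summary))
    .catch(error => console.log('Unable to complete API call: ' + error));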

Training

Create Custom Dataset

Creates a custom dataset from the Open Images repository.

POST: https://:32168/v1/train/create_dataset

Platform

All, !Raspberrypi, !Orangepi, !Radxarock, !Jetson

Parameters

  • dataset_name (String): The name of the dataset.

  • classes (String): A comma-delimited list of the classes to include in the dataset.

  • num_images (Integer): The maximum number of images to include per class. Optional. Defaults to 100

Response

JSON
{
  "success": (Boolean) // True if creating a dataset started.
  "moduleId": (String) // The Id of the module that processed this request.
  "moduleName": (String) // The name of the module that processed this request.
  "command": (String) // The command that was sent as part of this request. Can be detect, list, status.
  "statusData": (Object) // [Optional] An object containing (if available) the current module status data.
  "inferenceDevice": (String) // The name of the device handling the inference. eg CPU, GPU, TPU
  "analysisRoundTripMs": (Integer) // The time (ms) for the round trip to the analysis module and back.
  "processedBy": (String) // The hostname of the server that processed this request.
  "timestampUTC": (String) // The timestamp (UTC) of the response.
}

Example

JavaScript
var formData = new FormData();
formData.append("dataset_name", null);
formData.append("classes", null);
formData.append("num_images", 100);

var url = 'https://:32168/v1/train/create_dataset';

fetch(url, { method: "POST", body: formData})
      .then(response => {
           if (response.ok) {
               response.json().then(data => {
                   console.log("success: " + data.success)
                   console.log("moduleId: " + data.moduleId)
                   console.log("moduleName: " + data.moduleName)
                   console.log("command: " + data.command)
                   console.log("statusData: " + JSON.stringify(data.statusData))
                   console.log("inferenceDevice: " + data.inferenceDevice)
                   console.log("analysisRoundTripMs: " + data.analysisRoundTripMs)
                   console.log("processedBy: " + data.processedBy)
                   console.log("timestampUTC: " + data.timestampUTC)
               })
           }
       })
       .catch(error => {
           console.log('Unable to complete API call: ' + error);
       });
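
Below is a minimal sketch with concrete, purely illustrative values; class names must match Open Images class names.

JavaScript
// Minimal sketch: create a dataset of dogs and cats, 200 images per class (illustrative values).
var formData = new FormData();
formData.append("dataset_name", "critters");
formData.append("classes", "Dog,Cat");
formData.append("num_images", 200);

var url = 'https://:32168/v1/train/create_dataset';

fetch(url, { method: "POST", body: formData })
    .then(response => response.json())
    .then(data => console.log("dataset creation started: " + data.success))
    .catch(error => console.log('Unable to complete API call: ' + error));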

Train Custom Model (YOLOv5 6.2)

Creates a custom model from a custom dataset.

POST: https://:32168/v1/train/train_model

Platform

All, !Raspberrypi, !Orangepi, !Radxarock, !Jetson

Parameters

  • name (String): The name of the model.

  • dataset (String): The name of the dataset.

  • num_epochs (Integer): The number of epochs to train the model. Optional. Defaults to 100

  • device (String): None, or 'cpu', or 0, or '0', or '0,1,2,3'. Default: ''

  • batch (Integer): The batch size. Optional. Defaults to 8

  • freeze (Integer): The layers to freeze: 0 - none, 10 - backbone, 24 - all. Optional. Defaults to 0

  • hyp (Integer): Hyperparameters: 0 - fine-tune (VOC), 1 - scratch low, 2 - scratch medium, 3 - scratch high. Optional. Defaults to 0

Response

JSON
{
  "success": (Boolean) // True if training started.
  "moduleId": (String) // The Id of the module that processed this request.
  "moduleName": (String) // The name of the module that processed this request.
  "command": (String) // The command that was sent as part of this request. Can be detect, list, status.
  "statusData": (Object) // [Optional] An object containing (if available) the current module status data.
  "inferenceDevice": (String) // The name of the device handling the inference. eg CPU, GPU, TPU
  "analysisRoundTripMs": (Integer) // The time (ms) for the round trip to the analysis module and back.
  "processedBy": (String) // The hostname of the server that processed this request.
  "timestampUTC": (String) // The timestamp (UTC) of the response.
}

Example

JavaScript
var formData = new FormData();
formData.append("name", null);
formData.append("dataset", null);
formData.append("num_epochs", 100);
formData.append("device", );
formData.append("batch", 8);
formData.append("freeze", 0);
formData.append("hyp", 0);

var url = 'https://:32168/v1/train/train_model';

fetch(url, { method: "POST", body: formData})
      .then(response => {
           if (response.ok) {
               response.json().then(data => {
                   console.log("success: " + data.success)
                   console.log("moduleId: " + data.moduleId)
                   console.log("moduleName: " + data.moduleName)
                   console.log("command: " + data.command)
                   console.log("statusData: " + JSON.stringify(data.statusData))
                   console.log("inferenceDevice: " + data.inferenceDevice)
                   console.log("analysisRoundTripMs: " + data.analysisRoundTripMs)
                   console.log("processedBy: " + data.processedBy)
                   console.log("timestampUTC: " + data.timestampUTC)
               })
           }
       })
        .catch (error => {
            console.log('Unable to complete API call: ' + error);
        });

Resume Training a Model

Resumes training of a model.

POST: https://:32168/v1/train/resume_training

Platforms

All, !Raspberrypi, !Orangepi, !Radxarock, !Jetson

Parameters

  • model_name (Text): The name of the model.

Response

JSON
{
  "success": (Boolean) // True if successful.
  "moduleId": (String) // The Id of the module that processed this request.
  "moduleName": (String) // The name of the module that processed this request.
  "command": (String) // The command that was sent as part of this request. Can be detect, list, status.
  "statusData": (Object) // [Optional] An object containing (if available) the current module status data.
  "inferenceDevice": (String) // The name of the device handling the inference. eg CPU, GPU, TPU
  "analysisRoundTripMs": (Integer) // The time (ms) for the round trip to the analysis module and back.
  "processedBy": (String) // The hostname of the server that processed this request.
  "timestampUTC": (String) // The timestamp (UTC) of the response.
}

Example

JavaScript
var formData = new FormData();
formData.append("model_name", null);

var url = 'https://:32168/v1/train/resume_training';

fetch(url, { method: "POST", body: formData})
      .then(response => {
           if (response.ok) {
               response.json().then(data => {
                   console.log("success: " + data.success)
                   console.log("moduleId: " + data.moduleId)
                   console.log("moduleName: " + data.moduleName)
                   console.log("command: " + data.command)
                   console.log("statusData: " + JSON.stringify(data.statusData))
                   console.log("inferenceDevice: " + data.inferenceDevice)
                   console.log("analysisRoundTripMs: " + data.analysisRoundTripMs)
                   console.log("processedBy: " + data.processedBy)
                   console.log("timestampUTC: " + data.timestampUTC)
               })
           }
       })
        .catch (error => {
            console.log('Unable to complete API call: ' + error);
        });

Get Model Info (YOLOv5 6.2)

Gets information about a model.

POST: https://:32168/v1/train/model_info

Platforms

All, !Raspberrypi, !Orangepi, !Radxarock, !Jetson

Parameters

  • model_name (Text): The name of the model.

Response

JSON
{
  "success": (Boolean) // True if successful.
  "model_name": (String) // The name of the model.
  "complete": (Boolean) // True if the training was completed, can restart if not.
  "training_dir": (String) // The training directory containing the custom model file and the training results.
  "model_path": (String) // The path to best the custom model file.
  "results_graph_path": (String) // The path the results.png file if it exists.
  "results_csv_path": (String) // The path the results.csv file if it exists.
  "pr_curve_path": (String) // The path PR_curve.png file if it exists.
  "results_graph_image": (Base64ImageData) // The base64 encoded image of the result graphs.
  "pr_curve_image": (Base64ImageData) // The base64 encoded image of the PR Curve graph.
  "results_csv_file": (Base64ImageData) // The base64 encoded data for the results.csv file.
  "moduleId": (String) // The Id of the module that processed this request.
  "moduleName": (String) // The name of the module that processed this request.
  "command": (String) // The command that was sent as part of this request. Can be detect, list, status.
  "statusData": (Object) // [Optional] An object containing (if available) the current module status data.
  "inferenceDevice": (String) // The name of the device handling the inference. eg CPU, GPU, TPU
  "analysisRoundTripMs": (Integer) // The time (ms) for the round trip to the analysis module and back.
  "processedBy": (String) // The hostname of the server that processed this request.
  "timestampUTC": (String) // The timestamp (UTC) of the response.
}

Example

JavaScript
var formData = new FormData();
formData.append("model_name", null);

var url = 'https://:32168/v1/train/model_info';

fetch(url, { method: "POST", body: formData})
      .then(response => {
           if (response.ok) {
               response.json().then(data => {
                   console.log("success: " + data.success)
                   console.log("model_name: " + data.model_name)
                   console.log("complete: " + data.complete)
                   console.log("training_dir: " + data.training_dir)
                   console.log("model_path: " + data.model_path)
                   console.log("results_graph_path: " + data.results_graph_path)
                   console.log("results_csv_path: " + data.results_csv_path)
                   console.log("pr_curve_path: " + data.pr_curve_path)
                   // Assume we have an IMG tag named img1
                   img1.src = "data:image/png;base64," + data.results_graph_image;
                   // Assume we have an IMG tag named img2
                   img2.src = "data:image/png;base64," + data.pr_curve_image;
                   // Assume we have an IMG tag named img3
                   img3.src = "data:image/png;base64," + data.results_csv_file;
                   console.log("moduleId: " + data.moduleId)
                   console.log("moduleName: " + data.moduleName)
                   console.log("command: " + data.command)
                   console.log("statusData: " + JSON.stringify(data.statusData))
                   console.log("inferenceDevice: " + data.inferenceDevice)
                   console.log("analysisRoundTripMs: " + data.analysisRoundTripMs)
                   console.log("processedBy: " + data.processedBy)
                   console.log("timestampUTC: " + data.timestampUTC)
               })
           }
       })
        .catch (error => {
            console.log('Unable to complete API call: ' + error);
        });

Get Dataset Info (YOLOv5 6.2)

Gets information about a dataset.

POST: https://:32168/v1/train/dataset_info

Platforms

All, !Raspberrypi, !Orangepi, !Radxarock, !Jetson

Parameters

  • dataset_name (Text): The name of the dataset.

Response

JSON
{
  "success": (Boolean) // True if successful.
  "complete": (Boolean) // True if the training was completed, can restart if not.
  "moduleId": (String) // The Id of the module that processed this request.
  "moduleName": (String) // The name of the module that processed this request.
  "command": (String) // The command that was sent as part of this request. Can be detect, list, status.
  "statusData": (Object) // [Optional] An object containing (if available) the current module status data.
  "inferenceDevice": (String) // The name of the device handling the inference. eg CPU, GPU, TPU
  "analysisRoundTripMs": (Integer) // The time (ms) for the round trip to the analysis module and back.
  "processedBy": (String) // The hostname of the server that processed this request.
  "timestampUTC": (String) // The timestamp (UTC) of the response.
}

Example

JavaScript
var formData = new FormData();
formData.append("dataset_name", null);

var url = 'https://:32168/v1/train/dataset_info';

fetch(url, { method: "POST", body: formData})
      .then(response => {
           if (response.ok) {
               response.json().then(data => {
                   console.log("success: " + data.success)
                   console.log("complete: " + data.complete)
                   console.log("moduleId: " + data.moduleId)
                   console.log("moduleName: " + data.moduleName)
                   console.log("command: " + data.command)
                   console.log("statusData: " + JSON.stringify(data.statusData))
                   console.log("inferenceDevice: " + data.inferenceDevice)
                   console.log("analysisRoundTripMs: " + data.analysisRoundTripMs)
                   console.log("processedBy: " + data.processedBy)
                   console.log("timestampUTC: " + data.timestampUTC)
               })
           }
       })
        .catch (error => {
            console.log('Unable to complete API call: ' + error);
        });

Get Available Classes

Gets the list of classes that can be used to create a dataset.

POST: https://:32168/v1/train/list_classes

Platforms

All, !Raspberrypi, !Orangepi, !Radxarock, !Jetson

Parameters

  (None)

Response

JSON
{
  "success": (Boolean) // True if successful.
  "moduleId": (String) // The Id of the module that processed this request.
  "moduleName": (String) // The name of the module that processed this request.
  "command": (String) // The command that was sent as part of this request. Can be detect, list, status.
  "statusData": (Object) // [Optional] An object containing (if available) the current module status data.
  "inferenceDevice": (String) // The name of the device handling the inference. eg CPU, GPU, TPU
  "analysisRoundTripMs": (Integer) // The time (ms) for the round trip to the analysis module and back.
  "processedBy": (String) // The hostname of the server that processed this request.
  "timestampUTC": (String) // The timestamp (UTC) of the response.
}

Example

JavaScript
var formData = new FormData();   // this endpoint takes no parameters

var url = 'https://:32168/v1/train/list_classes';

fetch(url, { method: "POST", body: formData})
      .then(response => {
           if (response.ok) {
               response.json().then(data => {
                   console.log("success: " + data.success)
                   console.log("moduleId: " + data.moduleId)
                   console.log("moduleName: " + data.moduleName)
                   console.log("command: " + data.command)
                   console.log("statusData: " + JSON.stringify(data.statusData))
                   console.log("inferenceDevice: " + data.inferenceDevice)
                   console.log("analysisRoundTripMs: " + data.analysisRoundTripMs)
                   console.log("processedBy: " + data.processedBy)
                   console.log("timestampUTC: " + data.timestampUTC)
               })
           }
       })
        .catch (error => {
            console.log('Unable to complete API call: ' + error);
        });


Settings

Change a Setting

`POST: localhost:32168/v1/settings/<ModuleId>`
Sets the value of a setting for the given module.

Platforms

All

Parameters

  • name - The name of the setting. See the module settings for the module and global settings that can be changed.
  • value - The new value of the setting.

Response

JSON
{
  "success": (Boolean) // True if successful.
}

Note that when this API is called, the server will automatically restart the given module, and the change will persist across server and module restarts.
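Example

The reference does not include an example for this endpoint, so the following is a minimal sketch only; the module ID ("ObjectDetectionYOLOv5-6.2") and the setting being changed ("autostart", taken from the settings listed below) are illustrative assumptions.

JavaScript
var formData = new FormData();
formData.append("name", "autostart");   // illustrative setting name
formData.append("value", "false");      // illustrative new value

// Substitute the ID of the module you wish to change for the illustrative ID below.
var url = 'http://localhost:32168/v1/settings/ObjectDetectionYOLOv5-6.2';

fetch(url, { method: "POST", body: formData })
      .then(response => {
           if (response.ok) {
               response.json().then(data => {
                   console.log("success: " + data.success)
               })
           }
       })
       .catch (error => {
           console.log('Unable to complete API call: ' + error);
       });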

List Settings

`GET: localhost:32168/v1/settings/<ModuleId>`
Gets all the settings for the given module.

Platforms

All

Response

The response is a JSON object containing the usual "success" property plus two collections: `environmentVariables`, the environment variables passed to the module's code, and `settings`, which define how the module is launched.

JSON
{
    "success": true,

    "environmentVariables": {
        "CPAI_APPROOTPATH": "C:\\Program Files\\CodeProject\\AI",
        "CPAI_PORT": "32168",
        "APPDIR": "%CURRENT_MODULE_PATH%",
        "CPAI_HALF_PRECISION": "enable",
        "CPAI_MODULE_SUPPORT_GPU": "False",
        "CUSTOM_MODELS_DIR": "%CURRENT_MODULE_PATH%\\custom-models",
        "MODELS_DIR": "%CURRENT_MODULE_PATH%\\assets",
        "MODEL_SIZE": "medium",
        "USE_CUDA": "False",
        "YOLOV5_AUTOINSTALL": "false",
        "YOLOV5_VERBOSE": "false"
    },

    "settings": {
        "autostart": true,
        "supportGPU": true,
        "logVerbosity": null,
        "halfPrecision": "enable",
        "parallelism": 0,
        "postStartPauseSecs": 1
    }
}
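Example

A minimal sketch of retrieving a module's settings from JavaScript; the module ID ("ObjectDetectionYOLOv5-6.2") is illustrative only.

JavaScript
// Substitute the ID of the module you wish to query for the illustrative ID below.
var url = 'http://localhost:32168/v1/settings/ObjectDetectionYOLOv5-6.2';

fetch(url, { method: "GET" })
      .then(response => {
           if (response.ok) {
               response.json().then(data => {
                   console.log("success: " + data.success)
                   console.log("settings: " + JSON.stringify(data.settings))
                   console.log("environmentVariables: " + JSON.stringify(data.environmentVariables))
               })
           }
       })
       .catch (error => {
           console.log('Unable to complete API call: ' + error);
       });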

Status

Server Logs

GET: /v1/log/list?count=<count>&last_id=<lastid>

Gets up to 20 log entries, starting from id = <last_id>. Both values can be omitted. The return is an array of entries.

平台

Windows, Linux, macOS, macOS-Arm, Docker

Parameters

  • lastid - The ID of the last entry previously retrieved, so that only newer log entries are returned.
  • count - The number of entries to return.

Response

JSON
{
    "id": Integer, The id of the log entry
    "timestamp": A datetime value. The timestamp as UTC time of the log entry
    "entry": Text. The text of the entry itself
}
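Example

A minimal sketch of fetching log entries from JavaScript; the count and last_id values are illustrative, and the URL is assumed to be relative to the server's base address.

JavaScript
var url = '/v1/log/list?count=10&last_id=0';   // illustrative query values

fetch(url, { method: "GET" })
      .then(response => {
           if (response.ok) {
               response.json().then(entries => {
                   // The return is an array of log entries.
                   entries.forEach(entry => {
                       console.log(entry.id + " [" + entry.timestamp + "] " + entry.entry)
                   })
               })
           }
       })
       .catch (error => {
           console.log('Unable to complete API call: ' + error);
       });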

Server Ping

A ping to the server, simply to make it easy to check that it is alive.

GET: /v1/status/ping

Platforms

Windows, Linux, macOS, macOS-Arm, Docker

Response

JSON
{ 
    "success": true 
}

is returned if all is well.
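Example

A minimal sketch of a liveness check from JavaScript; the URL is assumed to be relative to the server's base address.

JavaScript
fetch('/v1/status/ping', { method: "GET" })
      .then(response => {
           if (response.ok) {
               response.json().then(data => {
                   console.log("alive: " + data.success)
               })
           }
       })
       .catch (error => {
           console.log('Unable to complete API call: ' + error);
       });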

Server Version

Returns the current version of the server.

GET: /v1/status/version

Platforms

Windows, Linux, macOS, macOS-Arm, Docker

Response

JSON
{
    "success": true,
    "version": {
        "major": 2,
        "minor": 2,
        "patch": 4,
        "preRelease": "Beta",
        "securityUpdate": false,
        "build": 0,
        "file": "CodeProject.AI.Server-2.2.4.zip",
        "releaseNotes": "Features and improvements"
    },
    "message": "2.2.4-Beta"
}
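Example

A minimal sketch of querying the server version from JavaScript; the URL is assumed to be relative to the server's base address.

JavaScript
fetch('/v1/status/version', { method: "GET" })
      .then(response => {
           if (response.ok) {
               response.json().then(data => {
                   console.log("version: " + data.message)       // eg "2.2.4-Beta"
                   console.log("file: "    + data.version.file)
               })
           }
       })
       .catch (error => {
           console.log('Unable to complete API call: ' + error);
       });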

Server Update Available

An indication of whether a newer version of the server is available.

GET: /v1/status/updateavailable

Platforms

Windows, Linux, macOS, macOS-Arm, Docker

Response

JSON
{
    "success"         : true/false,
    "message"         : "An update to version X  is available" / "You have the latest",
    "version"         : <version object>, // [Deprecated] The latest available version
    "current"         : <version object>, // The current installed version
    "latest"          : <version object>, // The latest available version
    "updateAvailable" : true/false
}

where the version object is of the form

JSON
"versionInfo": {
    "major": 2,
    "minor": 2,
    "patch": 4,
    "preRelease": "Beta",
    "securityUpdate": false,
    "build": 0,
    "file": "CodeProject.AI.Server-2.2.4.zip",
    "releaseNotes": "Features and improvements."
}
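
Example

A minimal sketch of checking for an available update from JavaScript; the URL is assumed to be relative to the server's base address.

JavaScript
fetch('/v1/status/updateavailable', { method: "GET" })
      .then(response => {
           if (response.ok) {
               response.json().then(data => {
                   console.log("message: "         + data.message)
                   console.log("updateAvailable: " + data.updateAvailable)
                   console.log("current: " + data.current.major + "." + data.current.minor + "." + data.current.patch)
                   console.log("latest: "  + data.latest.major  + "." + data.latest.minor  + "." + data.latest.patch)
               })
           }
       })
       .catch (error => {
           console.log('Unable to complete API call: ' + error);
       });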
