From 329ccde333ee16058276c40f730c9aa13d610266 Mon Sep 17 00:00:00 2001
From: gitea_admin
Date: Wed, 11 Mar 2026 14:42:51 +0000
Subject: [PATCH] Update wiki Home page for torchserve

---
 Home.md | 54 ++++++++++++++++++++++++++++++++++++++++--------------
 1 file changed, 40 insertions(+), 14 deletions(-)

diff --git a/Home.md b/Home.md
index 93a012c..be86159 100644
--- a/Home.md
+++ b/Home.md
@@ -1,21 +1,47 @@
-# Deploy the Project on CamelX Platform
+# TorchServe
 
-Deploy on CamelX Platform in three steps
+Provides access to PyTorch TorchServe servers to run inference with PyTorch models remotely.
 
-## Step 1: Create a release
-From the project space, click on **"Create a release"**
+## Metadata
 
-The new version is automatically available in the list
+| Property | Value |
+|----------|-------|
+| Scheme | `torchserve` |
+| Support Level | Preview |
+| Labels | ai |
+| Version | 4.10.2 |
 
-## Step 2: Deploy
-Click on **"Deploy"**
+## Maven Dependency
 
-- **Version:** Select the desired release
-- **Environment:** Choose `Development`, `Staging`, or `Production`
-- **Configuration:** Select the configuration source
-- **Resources:** Set CPU and Memory
+```xml
+<dependency>
+    <groupId>org.apache.camel</groupId>
+    <artifactId>camel-torchserve</artifactId>
+    <version>4.10.2</version>
+</dependency>
+```
 
-## Step 3: Expose
-Enable **"Expose"**
+## Endpoint Properties
 
-Choose an **API Gateway** (Internal, Public, etc.)
+| Name | Type | Required | Default | Description |
+|------|------|----------|---------|-------------|
+| `api` | string | ✓ | | The TorchServe API to use. |
+| `operation` | string | ✓ | | The API operation to perform. |
+| `modelName` | string | | | The name of the model. |
+| `modelVersion` | string | | | The version of the model. |
+| `lazyStartProducer` | boolean | | `false` | Whether the producer should be started lazily (on the first message). Starting lazily allows CamelContext and routes to start up in situations where a producer would otherwise fail during startup and prevent the route from starting; the startup failure can then be handled during routing via Camel's routing error handlers. Beware that when the first message is processed, creating and starting the producer may take a little time and prolong the total processing time. |
+| `inferenceAddress` | string | | | The address of the inference API endpoint. |
+| `inferencePort` | integer | | `8080` | The port of the inference API endpoint. |
+| `listLimit` | integer | | `100` | The maximum number of items to return for the list operation. When this value is present, TorchServe does not return more than the specified number of items, but it might return fewer. This value is optional. If included, it must be between 1 and 1000, inclusive; if omitted, it defaults to 100. |
+| `listNextPageToken` | string | | | The token to retrieve the next set of results for the list operation. TorchServe provides the token when the response to a previous call has more results than the maximum page size. |
+| `managementAddress` | string | | | The address of the management API endpoint. |
+| `managementPort` | integer | | `8081` | The port of the management API endpoint. |
+| `registerOptions` | object | | | Additional options for the register operation. |
+| `scaleWorkerOptions` | object | | | Additional options for the scale-worker operation. |
+| `unregisterOptions` | object | | | Additional options for the unregister operation. |
+| `url` | string | | | Model archive download URL; supports local file or HTTP(S) protocol. For S3, consider using a pre-signed URL. |
+| `metricsAddress` | string | | | The address of the metrics API endpoint. |
+| `metricsName` | string | | | Names of metrics to filter. |
+| `metricsPort` | integer | | `8082` | The port of the metrics API endpoint. |
+| `inferenceKey` | string | | | The token authorization key for accessing the inference API. |
+| `managementKey` | string | | | The token authorization key for accessing the management API. |
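+
+## Usage Example
+
+A minimal sketch of calling the inference API from a Camel route, assuming a `torchserve:api/operation` URI layout as implied by the required `api` and `operation` properties above; the `direct:predict` endpoint and the `mnist` model name are placeholders, not values defined on this page.
+
+```java
+import org.apache.camel.builder.RouteBuilder;
+
+public class TorchServePredictionRoute extends RouteBuilder {
+    @Override
+    public void configure() throws Exception {
+        // Send the incoming message body as prediction input to a model
+        // registered under the placeholder name "mnist" on a TorchServe
+        // server listening on the default inference port 8080.
+        from("direct:predict")
+            .to("torchserve:inference/predictions?modelName=mnist&inferencePort=8080");
+    }
+}
+```
+
+The prediction response from TorchServe is placed in the outgoing message body, so a caller can retrieve it, for example, with `template.requestBody("direct:predict", input, String.class)`.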