generated from camel.apache.org/component
gitea_admin edited this page 2026-03-11 14:42:46 +00:00
# TensorFlow Serving
Provide access to TensorFlow Serving model servers to run inference with TensorFlow saved models remotely.
## Metadata
| Property | Value |
|---|---|
| Scheme | tensorflow-serving |
| Support Level | Preview |
| Labels | ai |
| Version | 4.10.2 |
## Maven Dependency

```xml
<dependency>
    <groupId>org.apache.camel</groupId>
    <artifactId>camel-tensorflow-serving</artifactId>
    <version>4.10.2</version>
</dependency>
```
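With the dependency on the classpath, the component can be used from a Camel route. The sketch below uses the XML DSL; the `predict` API name, the `mnist` model name, and the timer trigger are illustrative assumptions, not values taken from this page.

```xml
<!-- Hypothetical route: send a prediction request once at startup.
     The request body would need to be a properly built request for the
     chosen API; building it is omitted here. -->
<route>
  <from uri="timer:tick?repeatCount=1"/>
  <!-- ... set the message body to the model input ... -->
  <to uri="tensorflow-serving:predict?modelName=mnist&amp;modelVersion=1&amp;target=localhost:8500"/>
</route>
```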
## Endpoint Properties

| Name | Type | Required | Default | Description |
|---|---|---|---|---|
| api | string | ✓ |  | The TensorFlow Serving API |
| modelName | string |  |  | Required servable name. |
| modelVersion | integer |  |  | Optional choice of which version of the model to use. Use this specific version number. |
| modelVersionLabel | string |  |  | Optional choice of which version of the model to use. Use the version associated with the given label. |
| signatureName | string |  |  | A named signature to evaluate. If unspecified, the default signature will be used. |
| target | string |  | localhost:8500 | The target URI of the client. See: https://grpc.github.io/grpc-java/javadoc/io/grpc/Grpc.html#newChannelBuilder%28java.lang.String,io.grpc.ChannelCredentials%29 |
| lazyStartProducer | boolean |  | false | Whether the producer should be started lazily (on the first message). Starting lazily allows the CamelContext and routes to start in situations where the producer would otherwise fail during startup and prevent the route from starting; the startup failure can then be handled during message routing by Camel's routing error handlers. Note that processing of the first message will take slightly longer while the producer is created and started. |
| credentials | object |  |  | The credentials of the client. |
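The properties above are supplied as query parameters on the endpoint URI, in the form `tensorflow-serving:<api>?<options>`. The following minimal sketch assembles such a URI from the documented properties; the `predict` API name and the `mnist` model name are assumptions for illustration, not values stated on this page.

```java
// Sketch: assembling a tensorflow-serving endpoint URI from the
// documented endpoint properties. Unset (null) options are omitted,
// so they fall back to the component defaults listed in the table.
public class EndpointUriExample {

    static String endpointUri(String api, String modelName, Integer modelVersion, String target) {
        StringBuilder uri = new StringBuilder("tensorflow-serving:").append(api);
        StringBuilder query = new StringBuilder();
        if (modelName != null) {
            query.append("modelName=").append(modelName);
        }
        if (modelVersion != null) {
            if (query.length() > 0) query.append('&');
            query.append("modelVersion=").append(modelVersion);
        }
        if (target != null) {
            if (query.length() > 0) query.append('&');
            query.append("target=").append(target);
        }
        if (query.length() > 0) {
            uri.append('?').append(query);
        }
        return uri.toString();
    }

    public static void main(String[] args) {
        System.out.println(endpointUri("predict", "mnist", 1, "localhost:8500"));
        // tensorflow-serving:predict?modelName=mnist&modelVersion=1&target=localhost:8500
    }
}
```

Such a URI would typically appear as a producer endpoint in a route, e.g. in a `.to(...)` call in the Java DSL.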