TorchServe
Since Camel 4.9
Only producer is supported
The TorchServe component provides support for invoking the TorchServe REST API. It enables Camel to access remote TorchServe servers to run inference with PyTorch models.
To use the TorchServe component, Maven users will need to add the following dependency to their pom.xml:
<dependency>
<groupId>org.apache.camel</groupId>
<artifactId>camel-torchserve</artifactId>
<version>x.x.x</version>
<!-- use the same version as your Camel core version -->
</dependency>
URI format
torchserve:api/operation[?options]
Where api represents one of the TorchServe REST APIs, and operation represents a specific operation supported by that API.
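For example, the following endpoint URIs call the ping and predictions operations of the Inference API (both are described in the Usage section below):

torchserve:inference/ping
torchserve:inference/predictions?modelName=squeezenet1_1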
Configuring Options
Camel components are configured on two separate levels:

- component level
- endpoint level
Configuring Component Options
At the component level, you set general and shared configurations that are then inherited by the endpoints. It is the highest configuration level.
For example, a component may have security settings, credentials for authentication, URLs for network connections, and so forth.
Some components only have a few options, and others may have many. Because components typically have pre-configured defaults that are commonly used, you often only need to configure a few options on a component, or none at all.
You can configure components using:

- the Component DSL
- a configuration file (application.properties, *.yaml files, etc.)
- Java code directly (a short sketch follows this list)
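As a minimal sketch of the Java approach, shared options can be set once on the component before the routes start. The setter names below are assumed to mirror the component options listed later on this page, so verify them against TorchServeConfiguration in your Camel version:

// Build a shared configuration once and register the component with it.
// The setters shown here are assumed from the option names in the tables below.
TorchServeConfiguration configuration = new TorchServeConfiguration();
configuration.setInferencePort(8080);
configuration.setManagementPort(8081);

TorchServeComponent torchserve = new TorchServeComponent();
torchserve.setConfiguration(configuration);
camelContext.addComponent("torchserve", torchserve);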
Configuring Endpoint Options
You usually spend more time setting up endpoints because they have many options. These options help you customize what you want the endpoint to do. The options are also categorized into whether the endpoint is used as a consumer (from), as a producer (to), or both.
Configuring endpoints is most often done directly in the endpoint URI as path and query parameters. You can also use the Endpoint DSL and DataFormat DSL as a type safe way of configuring endpoints and data formats in Java.
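For example, endpoint options such as modelName and modelVersion can be set directly as query parameters on the endpoint URI. The following sketch (assuming a registered mnist_v2 model, as in the Examples section below) asks the Management API for details about all versions of that model:

from("direct:describe-all")
    .to("torchserve:management/describe?modelName=mnist_v2&modelVersion=all")
    .log("All versions: ${body}");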
A good practice when configuring options is to use Property Placeholders (see the sketch after the list below).
Property placeholders provide a few benefits:
- They help prevent using hardcoded URLs, port numbers, sensitive information, and other settings.
- They allow externalizing the configuration from the code.
- They help the code to become more flexible and reusable.
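As a small sketch, the model name could be externalized to a property. The property name torchserve.model used here is made up for this example:

from("direct:predict")
    .to("torchserve:inference/predictions?modelName={{torchserve.model}}")
    .log("Result: ${body}");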
The following two sections list all the options, firstly for the component followed by the endpoint.
Component Options
The TorchServe component supports 22 options, which are listed below.
Name | Description | Default | Type |
---|---|---|---|
configuration | The configuration. | | TorchServeConfiguration
modelName | The name of the model. | | String
modelVersion | The version of the model. | | String
lazyStartProducer | Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel’s routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. | false | boolean
autowiredEnabled | Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. | true | boolean
healthCheckConsumerEnabled | Used for enabling or disabling all consumer based health checks from this component. | true | boolean
healthCheckProducerEnabled | Used for enabling or disabling all producer based health checks from this component. Notice: Camel has by default disabled all producer based health-checks. You can turn on producer checks globally by setting camel.health.producersEnabled=true. | true | boolean
inferenceAddress | The address of the inference API endpoint. | | String
inferencePort | The port of the inference API endpoint. | 8080 | int
listLimit | The maximum number of items to return for the list operation. When this value is present, TorchServe does not return more than the specified number of items, but it might return fewer. This value is optional. If you include a value, it must be between 1 and 1000, inclusive. If you do not include a value, it defaults to 100. | 100 | int
listNextPageToken | The token to retrieve the next set of results for the list operation. TorchServe provides the token when the response from a previous call has more results than the maximum page size. | | String
managementAddress | The address of the management API endpoint. | | String
managementPort | The port of the management API endpoint. | 8081 | int
registerOptions | Additional options for the register operation. | | RegisterOptions
scaleWorkerOptions | Additional options for the scale-worker operation. | | ScaleWorkerOptions
unregisterOptions | Additional options for the unregister operation. | | UnregisterOptions
url | Model archive download URL; supports the local file and HTTP(S) protocols. For S3, consider using a pre-signed URL. | | String
metricsAddress | The address of the metrics API endpoint. | | String
metricsName | Names of metrics to filter. | | String
metricsPort | The port of the metrics API endpoint. | 8082 | int
inferenceKey | The token authorization key for accessing the inference API. | | String
managementKey | The token authorization key for accessing the management API. | | String
Endpoint Options
The TorchServe endpoint is configured using URI syntax:
torchserve:api/operation
With the following path and query parameters:
Query Parameters (18 parameters)
Name | Description | Default | Type |
---|---|---|---|
modelName | The name of the model. | | String
modelVersion | The version of the model. | | String
lazyStartProducer | Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel’s routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. | false | boolean
inferenceAddress | The address of the inference API endpoint. | | String
inferencePort | The port of the inference API endpoint. | 8080 | int
listLimit | The maximum number of items to return for the list operation. When this value is present, TorchServe does not return more than the specified number of items, but it might return fewer. This value is optional. If you include a value, it must be between 1 and 1000, inclusive. If you do not include a value, it defaults to 100. | 100 | int
listNextPageToken | The token to retrieve the next set of results for the list operation. TorchServe provides the token when the response from a previous call has more results than the maximum page size. | | String
managementAddress | The address of the management API endpoint. | | String
managementPort | The port of the management API endpoint. | 8081 | int
registerOptions | Additional options for the register operation. | | RegisterOptions
scaleWorkerOptions | Additional options for the scale-worker operation. | | ScaleWorkerOptions
unregisterOptions | Additional options for the unregister operation. | | UnregisterOptions
url | Model archive download URL; supports the local file and HTTP(S) protocols. For S3, consider using a pre-signed URL. | | String
metricsAddress | The address of the metrics API endpoint. | | String
metricsName | Names of metrics to filter. | | String
metricsPort | The port of the metrics API endpoint. | 8082 | int
inferenceKey | The token authorization key for accessing the inference API. | | String
managementKey | The token authorization key for accessing the management API. | | String
Message Headers
The TorchServe component supports 9 message headers, which are listed below:
Name | Description | Default | Type |
---|---|---|---|
CamelTorchServeModelName (producer) Constant: MODEL_NAME | The name of the model. | | String
CamelTorchServeModelVersion (producer) Constant: MODEL_VERSION | The version of the model. | | String
CamelTorchServeUrl (producer) Constant: URL | Model archive download URL; supports the local file and HTTP(S) protocols. For S3, consider using a pre-signed URL. | | String
CamelTorchServeRegisterOptions (producer) Constant: REGISTER_OPTIONS | Additional options for the register operation. | | RegisterOptions
CamelTorchServeScaleWorkerOptions (producer) Constant: SCALE_WORKER_OPTIONS | Additional options for the scale-worker operation. | | ScaleWorkerOptions
CamelTorchServeUnregisterOptions (producer) Constant: UNREGISTER_OPTIONS | Additional options for the unregister operation. | | UnregisterOptions
CamelTorchServeListLimit (producer) Constant: LIST_LIMIT | The maximum number of items to return for the list operation. When this value is present, TorchServe does not return more than the specified number of items, but it might return fewer. This value is optional. If you include a value, it must be between 1 and 1000, inclusive. If you do not include a value, it defaults to 100. | | Integer
CamelTorchServeListNextPageToken (producer) Constant: LIST_NEXT_PAGE_TOKEN | The token to retrieve the next set of results for the list operation. TorchServe provides the token when the response from a previous call has more results than the maximum page size. | | String
CamelTorchServeMetricsName (producer) Constant: METRICS_NAME | Names of metrics to filter. | | String
Usage
Each API supports the operations listed below.
Inference API
The Inference API provides the inference operations.
torchserve:inference/<operation>[?options]
Operation | Description | Options | Result |
---|---|---|---|
ping | Get TorchServe status. | - | Server status message (String)
predictions | Predictions entry point to get inference using a model. | modelName, modelVersion | Inference result
explanations | Not supported yet. | - | -
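The model can also be chosen per message through the CamelTorchServeModelName header instead of the modelName URI option. The sketch below assumes the corresponding constant is named TorchServeConstants.MODEL_NAME, following the same pattern as the SCALE_WORKER_OPTIONS constant used in the Examples section:

from("direct:predict-dynamic")
    // Select the model for this exchange via the CamelTorchServeModelName header
    // (constant name assumed; see the Message Headers table above).
    .setHeader(TorchServeConstants.MODEL_NAME, constant("squeezenet1_1"))
    .to("torchserve:inference/predictions")
    .log("Result: ${body}");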
Management API
The Management API provides the operations to manage models at runtime.
torchserve:management/<operation>[?options]
Operation | Description | Options | Result |
---|---|---|---|
register | Register a new model in TorchServe. | url, registerOptions | Status message (String)
scale-worker | Configure the number of workers for a model. This is an asynchronous call by default; the caller needs to call the describe operation to check the resulting worker status. | modelName, scaleWorkerOptions | Status message (String)
describe | Provides detailed information about a model. If "all" is specified as the version, returns the details about all the versions of the model. | modelName, modelVersion | List of model details
unregister | Unregister a model from TorchServe. This is an asynchronous call by default; the caller can call the list or describe operation to check whether the model is still registered. | modelName, unregisterOptions | Status message (String)
list | List the models registered in TorchServe. | listLimit, listNextPageToken | Model list
set-default | Set the default version of a model. | modelName, modelVersion | Status message (String)
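Since scale-worker is asynchronous by default, a route will typically follow it with a describe call to verify the worker status. A rough sketch, reusing the mnist_v2 endpoints from the Examples section below:

from("direct:scale-and-check")
    .to("torchserve:management/scale-worker?modelName=mnist_v2")
    // scale-worker returns immediately, so query the model details afterwards
    .to("torchserve:management/describe?modelName=mnist_v2")
    .log("Model details: ${body[0]}");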
Examples
Inference API
from("direct:ping")
.to("torchserve:inference/ping")
.log("Status: ${body}");
from("file:data/kitten.jpg")
.to("torchserve:inference/predictions?modelName=squeezenet1_1")
.log("Result: ${body}");;
Management API
from("direct:register")
.to("torchserve:management/register?url=https://torchserve.pytorch.org/mar_files/mnist_v2.mar")
.log("Status: ${body}");
from("direct:scale-worker")
.setHeader(TorchServeConstants.SCALE_WORKER_OPTIONS,
constant(ScaleWorkerOptions.builder().minWorker(1).maxWorker(2).build()))
.to("torchserve:management/scale-worker?modelName=mnist_v2")
.log("Status: ${body}");
from("direct:describe")
.to("torchserve:management/describe?modelName=mnist_v2")
.log("${body[0]}");
from("direct:register")
.to("torchserve:management/unregister?modelName=mnist_v2")
.log("Status: ${body}");
from("direct:list")
.to("torchserve:management/list")
.log("${body.models}");
from("direct:set-default")
.to("torchserve:management/set-default?modelName=mnist_v2&modelVersion=2.0")
.log("Status: ${body}");
Spring Boot Auto-Configuration
When using torchserve with Spring Boot, make sure to use the following Maven dependency to have support for auto-configuration:
<dependency>
<groupId>org.apache.camel.springboot</groupId>
<artifactId>camel-torchserve-starter</artifactId>
<version>x.x.x</version>
<!-- use the same version as your Camel core version -->
</dependency>
The component supports 23 options, which are listed below.
Name | Description | Default | Type |
---|---|---|---|
camel.component.torchserve.autowired-enabled | Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. | true | Boolean
camel.component.torchserve.configuration | The configuration. The option is a org.apache.camel.component.torchserve.TorchServeConfiguration type. | | TorchServeConfiguration
camel.component.torchserve.enabled | Whether to enable auto configuration of the torchserve component. This is enabled by default. | | Boolean
camel.component.torchserve.health-check-consumer-enabled | Used for enabling or disabling all consumer based health checks from this component. | true | Boolean
camel.component.torchserve.health-check-producer-enabled | Used for enabling or disabling all producer based health checks from this component. Notice: Camel has by default disabled all producer based health-checks. You can turn on producer checks globally by setting camel.health.producersEnabled=true. | true | Boolean
camel.component.torchserve.inference-address | The address of the inference API endpoint. | | String
camel.component.torchserve.inference-key | The token authorization key for accessing the inference API. | | String
camel.component.torchserve.inference-port | The port of the inference API endpoint. | 8080 | Integer
camel.component.torchserve.lazy-start-producer | Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel’s routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. | false | Boolean
camel.component.torchserve.list-limit | The maximum number of items to return for the list operation. When this value is present, TorchServe does not return more than the specified number of items, but it might return fewer. This value is optional. If you include a value, it must be between 1 and 1000, inclusive. If you do not include a value, it defaults to 100. | 100 | Integer
camel.component.torchserve.list-next-page-token | The token to retrieve the next set of results for the list operation. TorchServe provides the token when the response from a previous call has more results than the maximum page size. | | String
camel.component.torchserve.management-address | The address of the management API endpoint. | | String
camel.component.torchserve.management-key | The token authorization key for accessing the management API. | | String
camel.component.torchserve.management-port | The port of the management API endpoint. | 8081 | Integer
camel.component.torchserve.metrics-address | The address of the metrics API endpoint. | | String
camel.component.torchserve.metrics-name | Names of metrics to filter. | | String
camel.component.torchserve.metrics-port | The port of the metrics API endpoint. | 8082 | Integer
camel.component.torchserve.model-name | The name of the model. | | String
camel.component.torchserve.model-version | The version of the model. | | String
camel.component.torchserve.register-options | Additional options for the register operation. The option is a org.apache.camel.component.torchserve.client.model.RegisterOptions type. | | RegisterOptions
camel.component.torchserve.scale-worker-options | Additional options for the scale-worker operation. The option is a org.apache.camel.component.torchserve.client.model.ScaleWorkerOptions type. | | ScaleWorkerOptions
camel.component.torchserve.unregister-options | Additional options for the unregister operation. The option is a org.apache.camel.component.torchserve.client.model.UnregisterOptions type. | | UnregisterOptions
camel.component.torchserve.url | Model archive download URL; supports the local file and HTTP(S) protocols. For S3, consider using a pre-signed URL. | | String
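For example, shared options can then be set in application.properties. The values below are placeholders, and the property names follow the table above:

camel.component.torchserve.inference-port=8080
camel.component.torchserve.management-port=8081
camel.component.torchserve.management-key=my-management-token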