diff --git a/Home.md b/Home.md
index 93a012c..94dd066 100644
--- a/Home.md
+++ b/Home.md
@@ -1,21 +1,37 @@
-# Deploy the Project on CamelX Platform
+# Kafka Batch Not Secured Source
 
-Deploy on CamelX Platform in three steps
+Receive data from Kafka topics in batch on an insecure broker and commit the offsets manually through KafkaManualCommit.
 
-## Step 1: Create a release
-From the project space, click on **"Create a release"**
+## Metadata
 
-The new version is automatically available in the list
+| Property | Value |
+|----------|-------|
+| Type | source |
+| Group | Kafka |
+| Namespace | Kafka |
+| Support Level | Stable |
+| Provider | Apache Software Foundation |
 
-## Step 2: Deploy
-Click on **"Deploy"**
+## Properties
 
-- **Version:** Select the desired release
-- **Environment:** Choose `Development`, `Staging`, or `Production`
-- **Configuration:** Select the configuration source
-- **Resources:** Set CPU and Memory
+| Name | Type | Required | Default | Description |
+|------|------|----------|---------|-------------|
+| `topic` | string | ✓ | | Comma-separated list of Kafka topic names |
+| `bootstrapServers` | string | ✓ | | Comma-separated list of Kafka broker URLs |
+| `autoCommitEnable` | boolean | | `true` | If true, periodically commit the offsets of messages already fetched by the consumer |
+| `allowManualCommit` | boolean | | `false` | Whether to allow manual commits |
+| `pollOnError` | string | | `ERROR_HANDLER` | What to do if Kafka throws an exception while polling for new messages. One of: DISCARD, ERROR_HANDLER, RECONNECT, RETRY, STOP |
+| `autoOffsetReset` | string | | `latest` | What to do when there is no initial offset. One of: latest, earliest, none |
+| `consumerGroup` | string | | | A string that uniquely identifies the group of consumers to which this source belongs |
+| `deserializeHeaders` | boolean | | `true` | When enabled, the Kamelet source deserializes all message headers to their String representation |
+| `batchSize` | int | | `500` | The maximum number of records returned in a single call to poll() |
+| `pollTimeout` | int | | `5000` | The timeout, in milliseconds, used when polling the KafkaConsumer |
+| `maxPollIntervalMs` | int | | | The maximum delay between invocations of poll() when using consumer group management |
+| `batchingIntervalMs` | int | | | In consumer batching mode, the time in milliseconds after which batch completion is triggered eagerly, even if the current batch has not reached the maximum size defined by `batchSize`. The trigger is not exact at the given interval, as it can only fire between Kafka polls (see `pollTimeout`) |
+| `topicIsPattern` | boolean | | `false` | Whether the topic is a pattern (regular expression). This can be used to subscribe to a dynamic number of topics matching the pattern |
 
-## Step 3: Expose
-Enable **"Expose"**
+## Dependencies
 
-Choose an **API Gateway** (Internal, Public, etc.)
+- `camel:kafka`
+- `camel:core`
+- `camel:kamelet`
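To illustrate how the documented properties would typically be wired together, here is a minimal sketch of a Camel K `Pipe` binding this Kamelet to a log sink. It assumes the Kamelet is published under the name `kafka-batch-not-secured-source`; that name, the broker address, topic, and sink are illustrative, not taken from the diff above:

```yaml
# Hypothetical Pipe binding the Kamelet to a log sink.
# All endpoint names and values below are illustrative.
apiVersion: camel.apache.org/v1
kind: Pipe
metadata:
  name: kafka-batch-to-log
spec:
  source:
    ref:
      kind: Kamelet
      apiVersion: camel.apache.org/v1
      name: kafka-batch-not-secured-source   # assumed Kamelet name
    properties:
      topic: my-topic
      bootstrapServers: my-broker:9092
      consumerGroup: my-group
      # Disable auto-commit and opt in to manual commits, per the
      # KafkaManualCommit behaviour described in the page above.
      autoCommitEnable: false
      allowManualCommit: true
      batchSize: 200
  sink:
    uri: "log:info"
```

With `autoCommitEnable: false` and `allowManualCommit: true`, offsets are not committed periodically; the consuming route is expected to commit each batch itself through the KafkaManualCommit handle the Kafka component exposes.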