From 991a0f1a13d55800621eee25160714573a80312b Mon Sep 17 00:00:00 2001
From: gitea_admin
Date: Wed, 11 Mar 2026 14:37:58 +0000
Subject: [PATCH] Update wiki Home page for
 kafka-batch-azure-schema-registry-source

---
 Home.md | 54 ++++++++++++++++++++++++++++++++++++++++--------------
 1 file changed, 40 insertions(+), 14 deletions(-)

diff --git a/Home.md b/Home.md
index 93a012c..0d7b6c0 100644
--- a/Home.md
+++ b/Home.md
@@ -1,21 +1,47 @@
-# Deploy the Project on CamelX Platform
+# Azure Kafka Batch through Eventhubs with Azure Schema Registry Source
 
-Deploy on CamelX Platform in three steps
+Receive data in batches from Kafka topics on Azure Event Hubs, combined with Azure Schema Registry, and commit offsets manually through KafkaManualCommit or automatically.
 
-## Step 1: Create a release
-From the project space, click on **"Create a release"**
+## Metadata
 
-The new version is automatically available in the list
+| Property | Value |
+|----------|-------|
+| Type | source |
+| Group | Kafka |
+| Namespace | Kafka |
+| Support Level | Preview |
+| Provider | Apache Software Foundation |
 
-## Step 2: Deploy
-Click on **"Deploy"**
+## Properties
 
-- **Version:** Select the desired release
-- **Environment:** Choose `Development`, `Staging`, or `Production`
-- **Configuration:** Select the configuration source
-- **Resources:** Set CPU and Memory
+| Name | Type | Required | Default | Description |
+|------|------|----------|---------|-------------|
+| `topic` | string | ✓ | | Comma-separated list of Kafka topic names |
+| `bootstrapServers` | string | ✓ | | Comma-separated list of Kafka broker URLs |
+| `securityProtocol` | string | | `SASL_SSL` | Protocol used to communicate with brokers. SASL_PLAINTEXT, PLAINTEXT, SASL_SSL and SSL are supported |
+| `saslMechanism` | string | | `PLAIN` | The Simple Authentication and Security Layer (SASL) mechanism used |
+| `password` | string | ✓ | | Password to authenticate to Kafka |
+| `autoCommitEnable` | boolean | | `true` | If true, periodically commit the offset of messages already fetched by the consumer |
+| `allowManualCommit` | boolean | | `false` | Whether to allow manual commits |
+| `pollOnError` | string | | `ERROR_HANDLER` | What to do if Kafka throws an exception while polling for new messages. One of: DISCARD, ERROR_HANDLER, RECONNECT, RETRY, STOP |
+| `autoOffsetReset` | string | | `latest` | What to do when there is no initial offset. One of: latest, earliest, none |
+| `consumerGroup` | string | | | A string that uniquely identifies the group of consumers to which this source belongs |
+| `deserializeHeaders` | boolean | | `true` | When enabled, the Kamelet source deserializes all message headers to their String representation |
+| `valueDeserializer` | string | | `com.microsoft.azure.schemaregistry.kafka.avro.KafkaAvroDeserializer` | Deserializer class for the value, implementing the Deserializer interface |
+| `azureRegistryUrl` | string | ✓ | | The Azure Schema Registry URL |
+| `specificAvroValueType` | string | | | The specific Avro type the deserializer will have to deal with |
+| `batchSize` | int | | `500` | The maximum number of records returned in a single call to poll() |
+| `pollTimeout` | int | | `5000` | The timeout used when polling the KafkaConsumer |
+| `maxPollIntervalMs` | int | | | The maximum delay between invocations of poll() when using consumer group management |
+| `batchingIntervalMs` | int | | | In consumer batching mode, the time in millis after which batch completion is triggered eagerly, even if the current batch has not reached the maximum size defined by `batchSize`. Note that the trigger is not exact at the given interval, as completion can only happen between Kafka polls (see the `pollTimeout` option) |
+| `topicIsPattern` | boolean | | `false` | Whether the topic is a pattern (regular expression). This can be used to subscribe to a dynamic number of topics matching the pattern |
 
-## Step 3: Expose
-Enable **"Expose"**
+## Dependencies
 
-Choose an **API Gateway** (Internal, Public, etc.)
+- `camel:kafka`
+- `camel:core`
+- `camel:kamelet`
+- `camel:azure-schema-registry`
+- `mvn:com.microsoft.azure:azure-schemaregistry-kafka-avro:1.1.1`
+- `mvn:com.azure:azure-data-schemaregistry-apacheavro:1.1.23`
+- `mvn:com.azure:azure-identity:1.15.0`
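
As a usage sketch for the kamelet documented in this patch, the source can be bound to a sink in a Camel K `Pipe`. All names, endpoint values, and the `log-sink` target below are illustrative placeholders, not part of the patch; only the kamelet name and property keys come from the page above:

```yaml
apiVersion: camel.apache.org/v1
kind: Pipe
metadata:
  # Placeholder name for this binding
  name: kafka-batch-azure-schema-registry-source-pipe
spec:
  source:
    ref:
      kind: Kamelet
      apiVersion: camel.apache.org/v1
      name: kafka-batch-azure-schema-registry-source
    properties:
      # Required properties from the table above; values are placeholders
      topic: my-topic
      bootstrapServers: my-namespace.servicebus.windows.net:9093
      password: "<event-hubs-connection-string>"
      azureRegistryUrl: https://my-registry.servicebus.windows.net
      # Optional batch tuning (defaults shown in the table)
      batchSize: 500
      pollTimeout: 5000
  sink:
    ref:
      kind: Kamelet
      apiVersion: camel.apache.org/v1
      name: log-sink
```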