Please note that if you use our service via a marketplace, the answers to some of these FAQs might not be applicable.
Apache Kafka is a distributed publish-subscribe messaging system designed to be fast, scalable, and durable. Apache Kafka enables you to publish, store, and process streams of records (also called messages or events) in real time. It can be used for a variety of applications, such as data integration, data streaming, website activity tracking, and log aggregation.
Kafka is open-source and written in Scala and Java.
Applications that send messages to Apache Kafka are called producers. A message can contain any kind of information: for example, details about an event on your website, or a simple text message intended to trigger an action. Applications that connect to Kafka and read the data are called consumers.
In Apache Kafka, you’ll find topics and partitions.
A topic is a logical collection of partitions that makes it easier to configure and manage groups of partitions; a topic consists of one or more partitions. A partition is the actual structure on the server that Kafka writes to and manages. Each partition contains a log to which all messages are appended.
Consumers connect to a partition and read from it.
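As an illustration, a minimal producer and consumer might look like the following sketch, which assumes the confluent-kafka Python client; the broker address, topic name, and group id are placeholders:

```python
from confluent_kafka import Producer, Consumer

# Producer: publish a message to a topic; Kafka appends it to one of
# the topic's partitions.
producer = Producer({'bootstrap.servers': 'localhost:9092'})
producer.produce('website-activity', key='user-42', value='page_view')
producer.flush()  # block until the message has been delivered

# Consumer: subscribe to the topic and read messages from its partitions.
consumer = Consumer({
    'bootstrap.servers': 'localhost:9092',
    'group.id': 'activity-readers',
    'auto.offset.reset': 'earliest',
})
consumer.subscribe(['website-activity'])
msg = consumer.poll(timeout=5.0)
if msg is not None and msg.error() is None:
    print(msg.topic(), msg.partition(), msg.value())
consumer.close()
```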
A complete beginner's guide to Apache Kafka can be found here.
ZooKeeper is a top-level Apache project that acts as a centralized service, keeping track of the status of the Kafka cluster nodes as well as Kafka topics, partitions, etc. All of our plans include a managed ZooKeeper cluster. More information about ZooKeeper can be found here.
Yes, CloudKarafka is available through both the AWS and Microsoft Azure marketplaces.
You should use the following ports:
Always use an encrypted connection when connecting to CloudKarafka over the public internet. Plaintext connections over the public internet are blocked by default for security reasons.
Use a plaintext connection when connecting to CloudKarafka over VPC peering, since the connection itself is private. The benefit is that the broker needs less computing power, as it does not have to encrypt and decrypt the data.
Use the cluster hostname; it resolves to the IP address of each node in the cluster, and the client software will automatically pick an available node. For example, if your cluster name is red-speedcar, your broker list should look like this:
red-speedcar.kafka.cloudkafka.com:9094
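For an encrypted connection, a client configuration might look like the following sketch using the confluent-kafka Python client. The security protocol, SASL mechanism, and credentials shown here are assumptions; use the exact settings and credentials from your CloudKarafka Control Panel:

```python
from confluent_kafka import Producer

conf = {
    # The cluster hostname resolves to every node; the client picks
    # an available broker automatically.
    'bootstrap.servers': 'red-speedcar.kafka.cloudkafka.com:9094',
    # Assumed settings -- verify the protocol and mechanism in the
    # Control Panel for your plan.
    'security.protocol': 'SASL_SSL',
    'sasl.mechanisms': 'SCRAM-SHA-256',
    'sasl.username': 'your-username',  # placeholder
    'sasl.password': 'your-password',  # placeholder
}
producer = Producer(conf)
```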
Your client application cannot verify the certificate from the Kafka broker. This can usually be resolved in one of three ways:
First, verify that you have entered the correct URLs for all the brokers.
Second, make sure that your client's Kafka library is up to date.
Third, if your client still does not accept the certificate, most client libraries support explicitly trusting individual CAs. As a last resort, download our CA certificate and configure your client to trust it directly.
*.kafka.cloudkafka.com (current domain)
*.srvs.cloudkafka.com (legacy domain, from 2023-10-02)
*.srvs.cloudkafka.com (legacy domain, between 2022-10-02 and 2023-10-02)
Please note that the certificate chain that CloudKarafka uses will change over time and you will need to update this file in the future. Put the certificate file in the same folder as the application source code, and configure your Kafka client with this option:
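The exact option name depends on your client library. As one illustration (an assumption, not necessarily the option referred to above), the confluent-kafka Python client, built on librdkafka, uses ssl.ca.location:

```python
from confluent_kafka import Consumer

conf = {
    'bootstrap.servers': 'red-speedcar.kafka.cloudkafka.com:9094',
    'security.protocol': 'SSL',
    # Trust CloudKarafka's CA explicitly; 'cloudkarafka.ca' is a
    # placeholder name for the downloaded CA certificate file.
    'ssl.ca.location': 'cloudkarafka.ca',
    'group.id': 'my-consumer-group',  # placeholder
}
consumer = Consumer(conf)
```

Other libraries expose the same idea under different names, for example ssl_cafile in kafka-python or ssl.truststore.location in the Java client.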
Yes, in the CloudKarafka Control Panel you can edit the size of your cluster and add additional nodes.
Yes, you can switch between the dedicated plans.
Retention policies
The retention.bytes configuration applies per partition, not per topic. So if you have a topic with 10 partitions, expect the topic to grow up to 10x the size of this configuration.
Using a segment.bytes value larger than retention.bytes can break size-based retention, since Kafka only deletes non-active segments: if the active segment never fills up, it is never closed, and the partition keeps filling the disk until the time-based retention is reached. Similarly, if the segment lifespan is much longer than the time-based retention, events can stick around for longer than expected.
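As an illustration, these settings can be applied per topic. The sketch below uses the confluent-kafka Python AdminClient with assumed values (placeholder broker and topic name), keeping segment.bytes well below retention.bytes so that segments are closed and become eligible for deletion:

```python
from confluent_kafka.admin import AdminClient, ConfigResource

admin = AdminClient({'bootstrap.servers': 'localhost:9092'})

# retention.bytes applies per partition: with 10 partitions this topic
# may grow to roughly 10 x 1 GiB on disk.
resource = ConfigResource(
    ConfigResource.Type.TOPIC, 'my-topic',  # placeholder topic name
    set_config={
        'retention.bytes': str(1024 ** 3),      # 1 GiB per partition
        'segment.bytes': str(100 * 1024 ** 2),  # 100 MiB segments
    },
)
for res, future in admin.alter_configs([resource]).items():
    future.result()  # raises an exception if the update failed
```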
We guarantee at least 99.95% availability on all dedicated plans. CloudKarafka will refund 50% of the cost of the plan for longer outages. Requests for refunds must be submitted in writing, within 30 days from the outage to which they refer, via email to contact@cloudkarafka.com.
We guarantee a maximum 30-minute initial response time on critical issues correctly submitted to our support system.
Complete SLA information can be found here.
Alarms: PagerDuty, VictorOps, OpsGenie
Logging: Papertrail, Loggly, LogEntries, Splunk, Stackdriver, CloudWatch
Metrics: CloudWatch, Librato, DataDog
Yes, for dedicated plans.
MirrorMaker is a tool for maintaining a replica of an existing Kafka cluster.
When MirrorMaker is enabled, all messages are consumed from the source cluster and re-published on the target cluster; i.e. data is read from topics in the source cluster and written to a topic with the same name in the destination cluster. This lets you send data to one cluster and read it from both. MirrorMaker can run on one or multiple nodes: if you have a five-node cluster, you can enable MirrorMaker on one node or on all five. More nodes means faster processing and a better chance of keeping the clusters in sync.
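Conceptually, MirrorMaker is a consumer and a producer glued together. The sketch below is not MirrorMaker itself, just a minimal Python illustration of the same consume-and-republish idea, with placeholder broker addresses and topic name:

```python
from confluent_kafka import Consumer, Producer

source = Consumer({
    'bootstrap.servers': 'source-cluster:9092',  # placeholder
    'group.id': 'mirror',
    'auto.offset.reset': 'earliest',
})
target = Producer({'bootstrap.servers': 'target-cluster:9092'})  # placeholder

source.subscribe(['my-topic'])
while True:
    msg = source.poll(timeout=1.0)
    if msg is None or msg.error():
        continue
    # Re-publish to a topic with the same name on the target cluster.
    target.produce(msg.topic(), key=msg.key(), value=msg.value())
    target.poll(0)  # serve delivery callbacks
```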
No discounts are available at this time.
No. There is no shipping cost since the service is shipped electronically.
No. The service is non-returnable.
You can choose to pay by credit card (due on the charge date) or via wire transfer (NET15). If you would like us to enable manual invoicing via wire transfer, send us an email once you have added all your billing information and we will enable it for you. Please note that we don’t accept checks.
The service is provided off-premise, in a data center and region chosen by the customer. The currently available data centers and regions can be found at the bottom of this page: https://www.cloudkarafka.com/pricing.html.
Our billing is pro-rated, which means that our customers only pay for the time the service has been available to them, billed the month after delivery. Thus, you won’t receive your first invoice when the account has been created; instead, you will receive it at the beginning of the following month.
No. Our customers often change their plan while using our service, so it is not practical to pay for a year upfront.
No. We don’t need any documents with signatures.
No. To safeguard customer data, active subscriptions aren’t deleted automatically. For example, a reseller that provides us with a PO for two months of the Happy Hippo plan is responsible for deleting the plan once the subscription expires; otherwise, you will be charged for the extra time your data remains on the system.
It’s best to extend the current PO. If you need two separate POs for the subscriptions, you need to open a new account so that the subscriptions can be billed separately.
Go to https://customer.cloudkarafka.com/login and enter your email address. Fill out all the information in the billing section, such as billing address, email, etc. Please note that it’s important that we register your billing information, not the end-user’s, since you are our direct customer.
The PO number can be specified in the billing section under “billing notes”. Or send it to us, and we will add it for you.
You are free to create and delete instances once the billing information is set up. It’s up to you and the end-user to decide who will create the subscription specified in the PO.
Invite the end-user to the account via https://customer.cloudkarafka.com/team so that they can start using the service.
Change the role of the person who created your account to “Billing Manager”. By doing so, you can access all invoices on the account and update the billing information, but you will not be able to edit the customer’s subscription. See more information here: https://www.cloudkarafka.com/blog/manage-instance-access-acl.html
Yes. You can read more here.
No separate agreement is required. Our Data Processing Agreement (DPA) for GDPR is an exhibit to our Terms of Service. Thus, our business relationship is automatically covered by a DPA when signing up for an account.
You have the right to see what personal information 84codes AB holds about you. You are entitled to a description of the information, what we use it for, who we might pass it on to, and any information we might have about its source.
Subject access requests should be made via email to compliance@84codes.com.
As a data controller, you decide for yourself where you want to host your data by choosing a data center and region. The data will not leave that region unless you choose to move it. In CloudKarafka’s role as data controller, we may collect and store contact information, such as email addresses and physical addresses, when customers sign up for our services or request support.
Your personal customer data (email and billing information) is stored in the US.
We are proud to be SOC 2 compliant, as defined by the AICPA. We have been audited against the Security (common criteria) and Availability Trust Services Criteria.
Our SOC 2 Type 2 report can be obtained under an NDA per request. Please send an email to compliance@cloudkarafka.com.