Confluent unveils next generation of Apache Kafka


Confluent, founded by the creators of Apache Kafka, recently announced the release of open source Confluent Platform 2.0, based on an updated Apache Kafka 0.9 core.

Representing a big leap forward in the maturation of the technology, the new release boasts fresh features to enable secure multi-tenant operations, simplify development and maintenance of applications that produce or consume data in Kafka, and provide high-throughput, scalable data integration with a wide array of data sources.

These new capabilities are designed to meet the needs of a growing wave of enterprise customers.

Created by Apache Kafka committers from Confluent, LinkedIn and other members of the vibrant Kafka community, Kafka 0.9 provides the critical foundation for the next adoption wave for Apache Kafka. 

Kafka is already a wildly popular open source system for managing real-time streams of data from websites, applications and sensors. It is now used as fundamental infrastructure by thousands of companies, ranging from LinkedIn, Netflix and Uber to Cisco and Goldman Sachs.

Confluent Platform is built upon Apache Kafka to solve the challenges of data integration by providing a central stream data pipeline. It turns an organisation's data into readily available low-latency streams, and further acts as a buffer between systems that might produce or consume such data at different rates.
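The buffering role described above can be illustrated with a toy sketch. This is plain Python with an in-memory queue, not the Kafka API: the queue stands in for a topic, the producer emits events as fast as it likes, and the consumer drains them in order at its own pace.

```python
import queue
import threading

# Toy illustration (NOT the Kafka API) of a log buffering between a fast
# producer and a slower consumer: the producer never waits for the
# consumer, and the consumer reads events in order at its own rate.
buffer = queue.Queue()  # stands in for a Kafka topic partition
consumed = []

def producer():
    # Emits 10 events as fast as it can, then a sentinel marking the end.
    for i in range(10):
        buffer.put(f"event-{i}")
    buffer.put(None)

def consumer():
    # Drains events at its own pace, preserving order.
    while True:
        event = buffer.get()
        if event is None:
            break
        consumed.append(event)

p = threading.Thread(target=producer)
c = threading.Thread(target=consumer)
p.start(); c.start()
p.join(); c.join()

print(consumed[0], consumed[-1], len(consumed))  # event-0 event-9 10
```

In real deployments Kafka plays this role durably and at scale: producers and consumers are fully decoupled, so a slow downstream system never back-pressures the upstream one.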

"Our industry is in the middle of a transition away from slow, batch data integration to real-time streams that make data available in milliseconds," said Jay Kreps, CEO and co-founder of Confluent, and co-creator of Kafka.

"Confluent Platform makes it easy for customers to embrace streaming data as a platform for data integration, real-time analytics and streaming data applications."

The founders of Confluent created Kafka while at LinkedIn to help cope with the very large-scale data ingestion and processing requirements of the business networking service.

As Fortune 500 companies tap into data streams for system monitoring, sensor data collection, and real-time business and customer insights, demand is soaring: Kafka adoption has grown sevenfold since January 2015. The updates in Confluent Platform 2.0 reflect these needs and provide an enriched platform experience for Kafka developers.

Confluent Platform 2.0 boasts new features to address the needs of large enterprises that must handle highly sensitive personal, financial or medical data, and operate multi-tenant environments with strict quality of service standards. Details include:
·         Data Encryption: Encrypts traffic over the wire using SSL.
·         Authentication and Authorisation: Allows access control, with permissions that can be set on a per-user or per-application basis.
·         Quality of Service: Configurable quotas allow reads and writes to be throttled so that one tenant cannot starve others.
·         Kafka Connect: A new connector-driven data integration framework for large-scale, real-time data import and export. It lets developers integrate a wide array of data sources with Kafka without writing code, and is built for 24/7 production use, with automatic fault tolerance, transparent high-capacity scale-out and centralised management, meaning more efficient use of IT staff.
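The encryption, authorisation and quota features are enabled on the broker. As a rough configuration sketch for a Kafka 0.9-era `server.properties` (keystore paths, passwords and quota values are placeholders, not recommendations):

```properties
# server.properties (sketch; paths, passwords and values are placeholders)

# Encryption over the wire: accept SSL client connections
listeners=SSL://:9093
ssl.keystore.location=/var/private/ssl/kafka.server.keystore.jks
ssl.keystore.password=changeit
ssl.key.password=changeit
ssl.truststore.location=/var/private/ssl/kafka.server.truststore.jks
ssl.truststore.password=changeit

# Authorisation: enable ACL-based access control per user/application
authorizer.class.name=kafka.security.auth.SimpleAclAuthorizer

# Quality of service: default per-client quotas in bytes/second
quota.producer.default=10485760
quota.consumer.default=20971520
```

Per-client overrides and ACL rules are then managed with the tooling shipped alongside the broker, so limits and permissions can differ between tenants.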

With Kafka, developers can build a centralised data pipeline that enables microservices or enterprise data integration, such as high-capacity ingestion routes into Apache Hadoop or traditional data warehouses, and that serves as a foundation for advanced stream processing with engines like Apache Spark, Storm or Samza.
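With Kafka Connect, such an ingestion route becomes a matter of configuration rather than code. As an illustrative sketch using the simple file source connector bundled with Kafka 0.9 (the file path and topic name are placeholders):

```properties
# connect-file-source.properties (sketch; file and topic are placeholders)
name=local-file-source
connector.class=FileStreamSource
tasks.max=1
file=/var/log/app/events.txt
topic=app-events
```

Run in standalone mode with the worker config that ships with the distribution, the connector tails the file and publishes each line to the topic, from which downstream systems can consume at their own pace.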
