Data streaming maturity requires both robust technology and a cultural shift in organizations
We’re at a pivotal point in how businesses manage and act on data. The foundational technologies are here: Kafka, real-time pipelines, streaming architectures. They work. But knowing how to use them well is a different story. A lot of companies still treat data like it’s static. That approach doesn’t scale anymore. Data has become a product. That means it needs to be clean, contextual, reusable, and above all, dependable.
Tim Berglund from Confluent nailed the issue at their Data Streaming World event in Melbourne. He said we’re only starting to get our arms around this change. That’s true. You can have the most advanced infrastructure in place, but if your teams don’t think in terms of long-term data products, you’re going to introduce noise into your systems instead of creating clarity. Kafka, for instance, makes it easy to spin up connections between systems, but without oversight, those connections create chaos. That’s what happens when you’re missing a strategy and a culture that promotes ownership over quality.
Building this kind of mindset, one where teams shape data consciously, is what’s needed now. It’s how companies stop reacting to problems and start building systems that get ahead of them. Leaders need to support this shift with consistency and direction. Not optional. It’s a baseline for any business betting on AI, automation, and real-time decision-making in the next five years.
Without this cultural reset, real-time data is just noise at scale.
Proactive governance and security integration (“shift left”) is essential for maintaining data quality and operational efficiency
Let’s talk security and governance. Right now, a lot of companies treat them as an afterthought. They build first and worry about controls later. That’s not sustainable, especially as data volumes grow and real-time decision-making becomes the baseline, not the bonus.
Tim Berglund made an important point: governance has to move closer to the data source. This is about simplifying the process and cutting down on reactive firefighting later. If you define access, integrity, and version control up front, your datasets are more trustworthy across product teams, compliance officers, and AI models. It’s not just about managing risk; it’s about operational sanity.
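What “defining integrity up front” can look like in practice is easy to sketch. The snippet below is a minimal, illustrative example using Confluent’s Schema Registry client for Python; the registry URL, subject name, and schema are assumptions, not any specific deployment. The point is that the data contract and its compatibility policy exist before a single event flows.

```python
# Illustrative only: enforcing a schema contract at the source with Confluent
# Schema Registry. Subject name, schema, and URL are hypothetical examples.
from confluent_kafka.schema_registry import Schema, SchemaRegistryClient

registry = SchemaRegistryClient({"url": "http://localhost:8081"})

order_schema = Schema(
    schema_str="""
    {
      "type": "record",
      "name": "OrderCreated",
      "fields": [
        {"name": "order_id", "type": "string"},
        {"name": "amount",   "type": "double"},
        {"name": "currency", "type": "string", "default": "AUD"}
      ]
    }
    """,
    schema_type="AVRO",
)

# Register the contract for the topic's value subject...
schema_id = registry.register_schema("orders.created-value", order_schema)

# ...and pin a compatibility policy so later changes can't silently break
# downstream consumers. This is governance applied before data flows.
registry.set_compatibility("orders.created-value", "BACKWARD")
print(f"Registered schema {schema_id} with BACKWARD compatibility")
```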
Executives should look at this not as overhead, but as an investment that pays back in resilience. When you implement access management, validation, and compliance rules at the origin, you reduce the drag on security teams and eliminate bottlenecks in product releases. Stop thinking of governance as a barrier. It’s an accelerant when used early and strategically. That’s where leverage lies.
Treating it as a cornerstone of the architecture, not a layer on top, saves time, avoids cost, and keeps your systems battle-ready. Ignore this, and every new integration becomes a liability. Set the expectation now, and you’ll clear the path for scale without surprise failures.
Confluent-enabled ecosystem tools are unifying operational and analytical environments
The real advantage in data-driven business right now is velocity, how fast teams can move from raw events to real insights. Operational data systems generate that flow in real time. Analytical systems extract value from it. Historically, these have lived in separate stacks with different teams, different rules, and different timelines. That’s inefficient. Today, that separation is dissolving.
Confluent sits at the center of this shift. Tools like Tableflow help translate Kafka topics into open table formats such as Apache Iceberg and Delta Lake, which simplify time-travel queries and schema evolution. It’s helping companies link real-time operations with near-real-time insights, all from a common data stream. That reduces complexity, boosts coordination across departments, and lowers the barrier to building new products from live signals.
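Tableflow’s own setup is product-specific, so treat the following as a hedged sketch of the consuming side only: once a topic has been materialized as an Iceberg table, analysts can query it with standard tooling such as PyIceberg. The catalog settings and table name here are assumptions for illustration.

```python
# Illustrative only: reading a Kafka topic that has been materialized as an
# Apache Iceberg table. Catalog settings and the table name are hypothetical.
from pyiceberg.catalog import load_catalog

catalog = load_catalog(
    "analytics",
    **{
        "type": "rest",
        "uri": "https://iceberg-catalog.example.com",  # placeholder endpoint
    },
)

# The same events the operational systems produced, now queryable as a table.
orders = catalog.load_table("streaming.orders_created")

# Pull a filtered slice into pandas for ad-hoc analysis (needs pyarrow/pandas).
recent_large_orders = orders.scan(row_filter="amount > 1000").to_pandas()
print(recent_large_orders.head())
```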
Tim Berglund emphasized Kafka as a “universal data substrate.” That’s accurate. The underlying event stream becomes the shared fabric between transactional and analytical systems. The impact is measurable: faster feedback, better visibility, and less duplication. More importantly, it aligns people. Developers, analysts, architects, they start speaking from the same reference point instead of working against each other on mismatched timelines.
For executives, this is about compressing cycles between data creation and outcomes. Don’t wait for perfect infrastructure. Start with what aligns teams and cuts time-to-insight. Unified environments are not just about tools. They’re about execution speed.
ASX is reengineering its data infrastructure to support high-volume trading and ensure seamless data connectivity across environments
If you’re running a critical, high-volume platform like the Australian Securities Exchange, failure isn’t just expensive, it’s unacceptable. Data loss, delays, or service outages can ripple out and impact markets, institutions, and national credibility. ASX understands this. That’s why they’ve overhauled their data infrastructure around Apache Kafka and Confluent tech to deliver high integrity with high volume.
Sumit Pandey, ASX’s Senior Manager for Data and Integration, laid out the goal: zero data loss, 99.95% uptime, and a recovery time objective of under two hours. That’s ambitious, and necessary. The exchange handles up to 20 million trades per day. Their new system is designed to span both cloud and on-premise environments, ensuring resilience no matter where data originates.
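ASX’s exact configuration isn’t public, but targets like zero data loss translate into concrete settings. As a rough sketch, a durability-first Kafka producer typically looks something like the following; the broker addresses, topic name, and key are placeholders, not ASX’s setup.

```python
# Illustrative only: producer settings that back a "zero data loss" posture.
# Broker addresses, topic name, and key are examples, not ASX's configuration.
from confluent_kafka import Producer

producer = Producer({
    "bootstrap.servers": "broker1:9092,broker2:9092,broker3:9092",
    "acks": "all",                  # wait for all in-sync replicas to confirm
    "enable.idempotence": True,     # no duplicates on retry
    "retries": 2147483647,          # retry transient failures indefinitely...
    "delivery.timeout.ms": 120000,  # ...but bound total time per message
})
# Pair this with topic-level replication.factor >= 3 and min.insync.replicas >= 2.

def on_delivery(err, msg):
    # Surface any message that could not be durably written.
    if err is not None:
        print(f"delivery failed for key={msg.key()}: {err}")

producer.produce("trades.settled", key="T-0001", value=b"...", on_delivery=on_delivery)
producer.flush()
```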
This is about future-proofing a core economic platform. What’s especially compelling for other decision-makers is the direct operational ROI: ASX saw savings of 20–30% in its first two years after implementing Confluent’s cluster linking capability.
When systems hit scale, traditional data integration methods break. Investing in purpose-built streaming and replication capabilities, as ASX has, sets the stage for real-time transparency and product innovation. For any organization managing critical volume and performance benchmarks, this signals a clear direction: prioritize infrastructure that scales with integrity, not just cost.
ANZ Bank has employed an event-driven, low-latency architecture to enhance real-time analytics and fraud detection capabilities
Speed, when applied to the right data, changes how companies operate. ANZ Bank understands this. Their need to detect fraud in under a second isn’t aspirational, it’s operational. To meet that demand, they’ve restructured their architecture to become fully event-driven. This means processing data as it happens, not after it settles into a warehouse.
Louisa Leung, Domain Architect for Integration at ANZ, explained how this architecture improves outcome delivery. Through an event mesh spanning cloud and on-prem environments, ANZ removes the drag of point-to-point connections. Instead of custom-building links between systems, they send events through a common platform, making systems more agile and response cycles faster.
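ANZ’s pipeline details aren’t public, but the event-driven pattern itself is easy to picture: score each transaction as the event arrives instead of after it lands in a warehouse. Below is a simplified sketch, with the topic name, group id, and fraud rule invented purely for illustration.

```python
# Illustrative only: an event-driven consumer scoring transactions as they
# arrive. Topic, group id, and the fraud rule are hypothetical placeholders.
import json
from confluent_kafka import Consumer

consumer = Consumer({
    "bootstrap.servers": "broker:9092",
    "group.id": "fraud-scoring",
    "auto.offset.reset": "latest",
})
consumer.subscribe(["payments.card-transactions"])

try:
    while True:
        msg = consumer.poll(0.1)   # short poll keeps end-to-end latency low
        if msg is None or msg.error():
            continue
        txn = json.loads(msg.value())
        # Stand-in rule; a real system would call a model or rules engine here.
        if txn.get("amount", 0) > 10_000 and txn.get("country") != txn.get("home_country"):
            print(f"flag for review: {txn.get('transaction_id')}")
finally:
    consumer.close()
```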
But Leung was direct: “It’s not simple.” For this approach to work, teams need consistent data standards and quality test data. Without those, latency issues can go undetected until they impact customer services or compliance. Data governance, schema discipline, and clean publication strategies need to be in place, early, if the full performance benefits are going to be realized across domains.
For executives in finance or any real-time decisioning business, the message is practical. Don’t wait until latency becomes visible to fix your architecture. Build low-latency systems from the ground up if speed is part of your business model. It improves trust, security, and resilience without slowing down change.
Bendigo Bank leverages internal cost signaling through advanced dashboards to promote efficient resource usage
Data infrastructure at scale can get expensive, especially when resource usage isn’t visible to the people consuming it. Bendigo Bank took the right approach by establishing a cost feedback loop before implementing formal chargebacks. That’s smart, effective leadership in action.
According to Dom Reilly, Service Owner of Databases and Middleware at Bendigo Bank, the team built a Splunk-based dashboard to track usage of schemas, storage, and managed connectors on Confluent. This dashboard sends a “cost signal” to internal teams. Once engineers saw where costs came from, behavior shifted fast: teams started deleting unused schemas, right-sizing instance usage, and forecasting better.
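Bendigo’s dashboard is built in Splunk, so the snippet below is not their implementation, just a toy illustration of the cost-signal idea: attribute resources to teams through a naming convention and report usage back to them. The team-prefix convention and registry URL are assumptions.

```python
# Illustrative only: a toy "cost signal" report, not Bendigo's Splunk dashboard.
# Assumes Schema Registry subjects follow a <team>.<topic>-value convention.
from collections import Counter
from confluent_kafka.schema_registry import SchemaRegistryClient

registry = SchemaRegistryClient({"url": "http://localhost:8081"})

# Tally registered subjects by team prefix as a rough proxy for footprint.
subjects_per_team = Counter(
    subject.split(".", 1)[0] for subject in registry.get_subjects()
)

print("Registered schema subjects per team:")
for team, count in subjects_per_team.most_common():
    print(f"  {team}: {count}")
```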
The data proved it: giving teams transparency and control over their own consumption drives immediate efficiency. Reilly confirmed that these behavioral changes started before the formal internal billing process, which begins in July. This means fewer surprises and smoother transitions once financial accountability is enforced.
For leaders managing multi-team technology environments, the takeaway is clear. Cost discipline doesn’t start with finance, it starts with data. Provide teams with tools that let them see their usage and impact. Combine that with metrics that track improvement over time, and you’ll drive smarter behavior without mandates. Control your cost curve, improve system hygiene, and scale more predictably.
Virgin Australia is enhancing customer experience and operational efficiency through real-time data streaming
In industries where timing and responsiveness are critical, data needs to move at the pace of operations. Virgin Australia is doing just that. Integrating Apache Kafka with Confluent’s managed services, they’re improving how their core systems communicate, bringing real-time visibility to processes that directly influence the passenger experience.
Nick Mouton, Integration Platform Lead at Virgin Australia, explained how they’re using streaming data to optimize services like fleet tracking, baggage location, and automated rebooking. These aren’t hypothetical cases. They are core operational demands that impact both cost and customer satisfaction. By enabling teams to respond in real-time, the airline ensures faster decision-making and smoother passenger operations.
Mouton emphasized starting with a well-defined use case and proving value early. That advice cuts through complexity. You don’t need to have every architectural piece in place to gain momentum. Build where impact is immediate, scale from there. Tim Berglund from Confluent reinforced the same idea in a shared discussion, saying it’s more important to deliver value incrementally than to wait until everything is structurally perfect.
For executives in transport, logistics, or consumer services, this line of thinking is critical. Streamlining operations by using real-time data drives measurable improvements across service quality, asset usage, and customer loyalty. Move fast where the gains are clear.
Livestock Improvement Corporation (LIC) has unified disparate data sources into a single governed stream processing system to drive actionable insights
LIC operates in a space that depends on precise, data-driven recommendations, in this case, agriculture and animal health. Previously, their data was fragmented: milk analysis, genetic sequencing, and biometric tracking each sat in separate systems. That slowed down insight generation and made coordination between product and engineering teams harder.
Now, with a governed streaming platform powered by Confluent, LIC has consolidated its operational and analytical pipelines into a single, integrated stream. This setup enables faster delivery of insights to farmers and removes friction across internal workflows. Data produced from cattle collars or genetic labs flows through a unified structure, supporting real-time analysis that improves breeding strategies, productivity, and health outcomes.
Vik Mohan, Principal Technologist at LIC, pointed out the non-technical part of the transformation: aligning developer and data engineer mindsets. That’s where most organizations struggle: not with the tools, but with how people use them. When teams work from a shared platform, collaboration improves. Product delivery speeds up. Errors go down. That kind of internal coherence is often overlooked.
For leaders working across science, agriculture, or other complex data environments, LIC’s success shows that meaningful transformation isn’t limited to cloud-native industries. When data is unified and governed properly, teams move faster, insights become more timely, and the enterprise delivers more value with less drag.
The bottom line
Data streaming is no longer a future-facing experiment. It’s active, scalable, and already delivering results across industries. But the companies unlocking the most value aren’t just implementing new tools, they’re reshaping how teams think, operate, and take ownership of data.
This isn’t a discussion about platforms. It’s about precision, alignment, and speed. The infrastructure is ready. The missing link is often cultural. High performance comes when data is treated like a product, designed, maintained, and owned with intention. That shift separates the companies that just deploy Kafka from the ones building real-time, intelligent operations around it.
For leaders, the path forward is clear. Back foundational systems, but expect behavior to change alongside them. Incentivize quality, tie cost to usage, and move governance closer to creation. Streaming data isn’t complex because of the tech, it’s complex because too few people are building with the full system in mind. That’s fixable.
Move fast, prioritize clarity, and back the teams designing with impact. The payoff isn’t just performance, it’s future agility.