Your Call is Being Analysed

Contact Centre as a Service (CCaaS) has become a core part of how organisations in New Zealand interact with customers, bringing together voice, messaging, and digital channels into a single cloud environment. Increasingly, these platforms are powered by artificial intelligence, which promises efficiency and improved customer experience. At the same time, AI introduces a different class of privacy risk: one that is less about simply holding information, and more about continuously analysing, inferring, and reshaping it in ways that are often invisible to the individual.

What makes AI in CCaaS particularly sensitive is that it transforms ordinary interactions into rich datasets about people. A customer call is no longer just a conversation; it can be transcribed, analysed for sentiment, assessed for intent, and used to predict behaviour. These systems can generate new information, such as whether a person sounds stressed, confused, or potentially vulnerable, without the individual ever explicitly providing that information. In a New Zealand context, this raises important questions under the Privacy Act 2020 and the Biometric Processing Privacy Code, particularly around whether the collection and use of such inferred data is necessary and within reasonable expectations.

The Office of the Privacy Commissioner has been increasingly clear that transparency is not just about notifying people that information is being collected, but about ensuring they understand what is actually happening to their data. In the context of AI, this becomes more challenging. A standard message that a call “may be recorded for training purposes” does not adequately describe real-time sentiment analysis, automated decision-support tools, or the potential creation of biometric identifiers such as voiceprints. The Privacy Commissioner’s guidance emphasises that individuals should be able to understand not only that their information is being collected, but also the nature of any analysis or secondary use. Where AI is involved, that expectation is harder to meet, and the risk of falling short is higher.

There is also a risk that AI-driven CCaaS systems quietly expand the scope of data use over time. Information collected to resolve a customer query may later be used to train machine learning models, improve products, or inform marketing strategies. Because AI systems derive value from large and diverse datasets, there is a strong incentive to retain and reuse information beyond its original purpose. This creates tension with the purpose limitation principle. The more data is repurposed, the less likely it is that the individual would reasonably expect those uses, particularly when they involve automated analysis rather than direct human handling.

Recent developments in the New Zealand market highlight how quickly these capabilities are evolving. For example, a bank has introduced a new CCaaS platform as part of its customer service transformation, incorporating cloud-based infrastructure and enhanced digital capabilities. While such systems can improve responsiveness and consistency, they also demonstrate how large volumes of customer interactions can be brought into environments where AI tools are readily applied at scale. In a banking context, this is particularly risky, as conversations may include sensitive financial information. The combination of detailed interaction data and advanced analytics creates a setting where privacy risks are not just theoretical but operational.

Another concern is that AI systems may influence outcomes in ways that are not obvious. Tools that assess sentiment or prioritise calls can affect how customers are treated, which issues are escalated, or how quickly responses are provided. If these systems are not well understood or carefully managed, they may introduce bias or lead to inconsistent experiences. In New Zealand, where fairness and equitable treatment are key considerations, this becomes more than a technical issue. The Privacy Commissioner has signalled that organisations remain responsible for decisions made with the assistance of automated systems, even where those systems are provided by third-party vendors.

A further challenge is the opacity of many AI tools embedded in CCaaS platforms. These systems are often proprietary, and organisations may have limited visibility into how they operate or how decisions are reached. This creates difficulty in meeting transparency obligations, particularly if a customer seeks to understand how their information has been used. If an organisation cannot explain its use of personal information, it is unlikely to meet its obligations. In practice, this means that adopting AI-driven CCaaS solutions requires not just technical implementation, but a clear understanding of how those tools work and how they affect individuals.

Ultimately, the privacy risks of AI in CCaaS are less about the existence of the technology and more about the gap between what organisations do with data and what individuals expect. As systems become more sophisticated, the challenge will be ensuring that the use of AI remains transparent, proportionate, and aligned with what customers would reasonably expect when they pick up the phone or start a chat.
