01:35 AM 8th April 2026 GMT+00:00
Singapore’s AI Ambitions Clash with Cloud Hesitancy
Analysis by Bradley Maclean
A private roundtable of senior AML leaders revealed that, despite regulatory encouragement, cultural and technical barriers inside institutions are holding back AI adoption in Singapore.
Financial institutions in Singapore are grappling with a contradiction. A strong regulatory push and internal ambition to leverage artificial intelligence (AI) are being hindered by an institutional reluctance to embrace cloud computing, a foundational technology for scaling AI.
This was a key theme that emerged from a private roundtable discussion held in Singapore on 11 March 2026, hosted by Regulation Asia and Fenergo, a firm specialising in client lifecycle management technology for financial institutions.
The session, which brought together senior financial crime, compliance, and technology leaders from global and domestic banks, revealed that while the promise of AI to transform client lifecycle management (CLM) is well understood, progress is being held back by legacy infrastructure, data quality issues, and a pervasive, risk-averse culture around technology adoption.
The cloud conundrum
A major point of discussion centred on what Cengiz Kiamil, Managing Director for APAC at Fenergo, described as a "contradiction" between Singapore’s stated position as a leader in AI and the practical realities of cloud adoption within banks.
“In order to use AI at scale, you need compute and you need cloud,” Kiamil said, contrasting the situation in Singapore with other markets. “In the Philippines, every bank is using the cloud. Banks in Australia are also on cloud for these types of processes.”
The participants agreed that this hesitancy is not being driven by regulators. According to a senior compliance leader from a major bank, the Monetary Authority of Singapore (MAS) may have been wary of cloud several years ago, but today the regulator’s position has reversed. “If I speak to MAS about cloud, they ask you, ‘Why not?’” he said.
Another participant added, "It was so much easier to do the pitch to MAS and get their acceptance" for a new AI-driven transaction monitoring system than it was to convince internal staff.
The barriers are now seen as overwhelmingly internal. One banking professional described Singapore’s financial sector as a “victim of our own success”. He argued that a highly regulated environment, developed over years to manage cybersecurity, data protection, and system resilience risks, has created an entrenched culture within financial institutions that acts as a roadblock to innovation.
“Singapore, because of its own success, has developed a whole generation of experts protecting our system,” he explained. “Even if the regulator were to come out and say, ‘I encourage you guys, jump out of your box, start thinking cloud’, you will still have to change the mindset of a lot of people. It is almost like a culture, like a religion. It’s very difficult.”
Several participants also pointed to other practical challenges. For instance, for firms that have already made significant investments in on-premise capabilities, there is often a need to amortise those costs.
One participant described a nearly one-year wait to procure a server to increase capacity for an LLM project, a process that required navigating several levels of internal clearance. Another participant suggested that a desire for control among internal technology leaders could also be a factor.
The path forward, some suggested, lies in “iterative” adoption. “Let's start small, perhaps with one business unit, to showcase what benefits that cloud service might bring,” said one participant. “Spend a lot of time explaining the security around cloud, and then slowly show it to the mothership. If you try to make the entire bank move to cloud at once, it will never happen.”
Data foundation
This technological hesitancy exists alongside significant pressure on existing CLM operating models.
Fenergo’s Kiamil pointed to data showing that the cost of onboarding and KYC in Singapore is the highest in the world, as is the client abandonment rate, which stands at around 60 percent. “Fundamentally, the structure that coordinates and facilitates these activities is not effective,” he said.
A core problem is that CLM frameworks have evolved into a “layer cake”, with new products, jurisdictions, and policies continually added over time. “And then when you try to layer AI on the front of it, you end up with a break in the data model, a break in architecture, and an incoherent application of policy,” Kiamil said.
The foundational challenge, echoed by multiple participants, is poor data quality. “You've got to have clean data at the start. And I feel like that's the fundamental problem with a lot of older institutions,” said a sanctions expert at the discussion.
She highlighted the struggle with inconsistent data tagging across different business lines, such as using "Korea North" in one system and "DPRK" in another. “You can't even get consistent tagging, and that's assuming that you actually have complete data, let alone whether it is clean.”
Without addressing this foundational data issue, layering on advanced technologies like AI becomes exponentially more difficult. A data science expert from a Singapore bank noted that while his central data group has fewer issues aligning data, the key challenge is now accountability, especially as AI becomes more autonomous.
AI in practice
Despite the challenges, institutions are actively experimenting with AI, largely focusing on what Fenergo categorises as "Gen 1" applications: using AI for automation of high-volume, low-complexity tasks.
One participant described a "Gen 1.5" proof-of-concept aimed at detecting fraudulent documents, a key risk highlighted by the so-called “billion-dollar money laundering case” in 2023.
He explained that his bank’s models have evolved from simply detecting layout and grammatical errors in documents to performing sophisticated ratio analysis benchmarked against different industries.
“The bad guys are getting very good. They’re coming out with fraudulent documents that will appear ‘good to go’ to the human eye,” he said, noting that many of the entities involved in the 2023 case were initially classified as low-risk.
As criminals adapt, so must the technology. “In just a few months’ time, we are able to identify fewer and fewer documents with layout or grammar problems. And we discovered that the bad actors now know that the banks have a means to detect all this, and they’ve responded by creating documents that appear much more professional.”
Other use cases discussed included moving beyond traditional rule-based transaction monitoring towards “contextual surveillance” that understands a customer’s typical counterparties and trading patterns, and using machine learning to detect fraud, waste, and abuse in the insurance sector.
However, moving towards more complex "Gen 2" AI – involving complex decision-making and creativity – remains a high-risk proposition. Participants expressed concern over the "black box" nature of some AI models and the critical need for explainability and auditability.
“I remember many years ago, MAS was telling me they don’t want black boxes and they would hold me accountable regardless of the outcome,” recounted a senior banking executive.
He questioned whether this was the right message to send to the industry at the time, arguing that firms must have the "courage and the conviction" to adopt AI, accepting that while it may not be perfect, the overall outcome will be better.
Path forward
Given the difficulty of securing buy-in for large, multi-year transformation programmes, the consensus was that the most viable path forward is an iterative one.
“I'm not a fan of these multi-year, big, expensive transformation programmes,” said Kiamil, noting that the tenure of “senior people who care about it is often too short to see it through”. He advocated an approach of “think big, start small, and deliver quickly”, aiming for initial value creation within four to six months to build momentum and secure stakeholder buy-in.
A representative from a large insurance group shared a similar experience, describing how his team first piloted an AI fraud-detection solution in a single market, Indonesia. After demonstrating clear returns, the project drew significant interest from other markets, allowing it to scale.
However, another participant warned that a major challenge is that digital proposals are often commercially driven without fully integrating RegTech or risk-related controls from the outset.
“What people have not seen is that the financial crime environment is so rapidly changing that you will also need to have your risk controls which are digitally embedded to also be agile enough for tweaking,” she said. “Today this is all planned in a disintegrated and incohesive way. It's all going in bits and pieces and blocks. So it just doesn’t always come together easily.”
Ultimately, the participants agreed that a fundamental shift is needed to view CLM not as a series of siloed compliance obligations, but as a critical, end-to-end domain that is core to the business.
--
Disclosure: This roundtable and article were produced by Regulation Asia in collaboration with Fenergo, which specialises in digital client lifecycle management and KYC technology for financial institutions.