At FutureNet Asia 2025, Vinod Joseph, Vice President for Cloud, AI and Enterprise Architecture at Singtel Group, shared a deep dive into how telcos can scale AI in a cloud native manner. His talk moved beyond the buzz around generative AI and looked at the infrastructure and operational realities required for long-term success.
Vinod positioned the industry as moving into a second phase of AI adoption. The early phase focused on agents, copilots and low-code platforms. The next phase, however, demands robust systems for managing data pipelines, training and fine-tuning models, monitoring model performance and deploying AI in production environments at scale. He stressed that this shift requires not only flexible cloud environments, but also consistent engineering practices and strong governance frameworks.
A core theme of the session was the importance of avoiding proprietary lock-in. Vinod argued that as AI workloads grow, organisations need the freedom to deploy where it makes sense, whether on-premises or on public cloud, while maintaining agility and operational consistency. Kubernetes featured strongly as a foundational platform for AI workloads, offering orchestration capabilities and portability across environments.
Three open-source frameworks were highlighted as central to Singtel’s approach. Kubeflow supports the orchestration of AI and machine learning pipelines, handling key stages from model training to promotion into production. Ray helps distribute compute workloads across GPUs and servers, enabling efficient training of large-scale models where data and model components cannot fit on a single device. MLflow, meanwhile, provides experiment tracking, model registry and deployment management, simplifying lifecycle operations and improving observability.
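All three frameworks are driven from Python. As a rough illustration of the pattern Ray applies, here is a minimal sketch using only the standard library's `concurrent.futures`: it fans stand-in training tasks out across a worker pool and gathers the results, the same scatter/gather shape Ray distributes across GPUs and servers (in real Ray the task would be decorated with `@ray.remote` and collected with `ray.get`; this sketch only approximates that model, and `train_shard` is a hypothetical placeholder, not Singtel's code).

```python
from concurrent.futures import ThreadPoolExecutor

def train_shard(shard_id: int) -> float:
    # Hypothetical stand-in for training one data shard; returns a toy loss.
    return 1.0 / (shard_id + 1)

# Fan the shards out across a pool of workers, then gather the results.
# Ray generalises this scatter/gather pattern across machines and GPUs.
with ThreadPoolExecutor(max_workers=4) as pool:
    losses = list(pool.map(train_shard, range(4)))
```

The point of the pattern is that the caller never manages individual workers; it submits tasks and collects results, which is what lets an orchestrator schedule work wherever capacity exists.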
Vinod stressed that scaling AI requires more than computational power. Efficient data handling, reproducibility, experiment lineage and reliable recovery from failures are just as important. As organisations accelerate AI adoption, these capabilities become essential not only for performance but also for cost control. Open-source tooling, he argued, is becoming increasingly competitive and offers a viable way to balance capability with cost as deployments scale.
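The reproducibility and lineage capabilities mentioned above are what a tracking layer such as MLflow supplies. A minimal pure-Python sketch of the bookkeeping involved is shown below: every run records its parameters, a fixed random seed and a stable id derived from its exact configuration, so the run can be replayed and audited later. The `run_experiment` helper is hypothetical; a real stack would call MLflow's `log_params` and `log_metrics` instead.

```python
import hashlib
import json
import random

def run_experiment(params: dict) -> dict:
    # Hypothetical stand-in for one tracked training run; in a real
    # pipeline this bookkeeping would go through mlflow.log_params /
    # mlflow.log_metrics rather than a plain dict.
    random.seed(params["seed"])   # fix randomness so the run can be replayed
    metric = random.random()      # stand-in for a training metric
    return {
        "params": params,
        "metrics": {"val_score": metric},
        # Hashing the exact config gives each run a stable lineage id.
        "run_id": hashlib.sha256(
            json.dumps(params, sort_keys=True).encode()
        ).hexdigest()[:12],
    }

first = run_experiment({"seed": 42, "lr": 0.01})
replay = run_experiment({"seed": 42, "lr": 0.01})
```

Because the seed and configuration are captured, replaying the same parameters yields the same id and the same metric, which is precisely the lineage property that makes failures recoverable and results auditable.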
Singtel’s perspective reflects a growing maturity in telecom AI strategy. The focus is shifting from exploration to industrialisation, from early pilots to repeatable and governable systems. With cloud native architectures, distributed computing and open frameworks at the core, the goal is to build platforms that can scale flexibly, avoid dependency on any single vendor and support the next generation of AI-driven services.
The full presentation is available below for anyone who would like to watch it.
Related Posts:
- The 3G4G Blog: What is Cloud Native and How is it Transforming the Networks?
- Operator Watch Blog: 6 Mobile Operators in the Global 100 Ranking of the World’s Most Sustainable Companies
- Operator Watch Blog: Singtel Surpasses 95% Nationwide 5G Standalone Coverage in Singapore
- Operator Watch Blog: Singapore is a 5G leader in Southeast Asia
- Operator Watch Blog: Singtel shows the power of Standalone 5G
- Private Networks Technology Blog: Singtel’s 5G Powers the Transformation of Singapore’s Tuas Mega Port
- Private Networks Technology Blog: 5G Private Networks Driving Industry Transformation in APAC
- Operator Watch Blog: Cloud Native Progress and Pain Points According to Orange
- The 3G4G Blog: Cloud Native Telco Transformation Insights from T-Systems
- Operator Watch Blog: How AI Is Reshaping Network Operations at Deutsche Telekom
