Value Chain Asia Magazine
Technology

AI bias in supplier selection challenges fair procurement across Asia

30 Apr 2026 · 9 min read

Summary

  • AI-driven supplier selection is becoming standard across Asia, improving speed and efficiency but also embedding historical and design biases that can disadvantage emerging-market and minority suppliers.
  • Because most procurement AI operates as a “black box,” companies often cannot trace or justify decisions. This lack of transparency makes biased outcomes harder to detect and correct.
  • Governments are responding at different speeds—from voluntary frameworks in some markets to stricter transparency and compliance regimes in others—while firms increasingly rely on tools like fairness audits and explainability models to manage risk.
Companies across Asia are turning procurement decisions over to machines. Algorithms now shortlist suppliers, score bids and recommend vendors with very little human oversight, all in the name of speed, efficiency and “objective” decision-making. For procurement teams under pressure to cut costs and move faster, the appeal is obvious.

The uptake has been rapid. According to a US-focused 2024 survey by AI at Wharton, 94% of procurement executives now use generative AI at least once a week, with weekly usage increasing 44 percentage points from 2023 to 2024. A Deloitte survey of over 100 Chief Procurement Officers from Europe, North America and Asia-Pacific found that 92% are exploring GenAI capabilities, with 22% planning to invest over $1 million by 2025. In APAC specifically, the KPMG 2025 CEO Outlook found that 67% of regional CEOs cite generative AI as a top investment priority, with 87% expecting returns within three years.

But this rush toward automation brings a problem nobody wanted: AI systems that discriminate.

The bias problem

AI tools trained on historical supplier data often favour businesses from developed regions over those in emerging markets, systematically excluding minority-owned businesses and limiting supply chain diversity. The algorithms learn from past human decisions. And, if those decisions reflected bias, the AI will too.

This problem is not theoretical. While the most prominent cases involve hiring rather than procurement, the underlying mechanism is identical: biased data produces biased outcomes, at scale. Amazon famously scrapped an experimental AI recruitment tool after discovering it systematically disadvantaged women, penalising CVs that referenced women’s colleges or activities.

Closer to home, an AI-based job recommendation system in Indonesia unintentionally excluded women from certain job opportunities because of historical gender imbalances embedded in its training data. The case highlights a pattern identified by UN Women and the UN University of Macau, which found that four types of gender biases in AI, namely discrimination, stereotyping, exclusion and insecurity, remain prevalent across Southeast Asia.

Asia’s linguistic, cultural and economic diversity is often poorly represented in models trained primarily on Western datasets, leading to blunt assumptions and skewed scoring. Research published in PNAS Nexus confirms this Western cultural bias, showing that AI model outputs favour self-expression values commonly found in ‘English-speaking and Protestant European countries’. Another example: agricultural AI tools and healthcare systems trained on Western data frequently fail in Southeast Asian contexts because they are not adapted to the region’s unique conditions and practices.

When training data lacks diversity, data bias emerges and skews recommendations toward established suppliers. Design bias, by contrast, occurs when AI models optimise for cost and efficiency while overlooking sustainability and ethical sourcing.

Feedback loops emerge when suppliers are repeatedly overlooked and struggle to build performance records, making future selection even harder.

Consider how this plays out in practice. Research on AI hiring systems in Southeast Asia found that when evaluating candidates with identical qualifications, AI models showed scoring disparities based on geography, education background, and previous company prestige. These are factors that often disadvantage candidates from developing economies. Similar dynamics affect supplier selection: vendors from emerging markets may receive lower scores simply because historical data favours established suppliers from developed regions.

The mechanism is simple enough. Suppliers with non-Western names might receive lower initial screening scores. Companies from countries with lower GDP per capita face higher documentation requirements. Vendors without polished English-language websites are screened out before substance is assessed.
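That feedback loop is easy to reproduce. The toy Python simulation below is purely illustrative (the suppliers, weights and scoring function are invented, not drawn from any real procurement system): every supplier has identical underlying quality, but incumbents start with a track record, and because only winners accumulate history, the incumbents keep winning.

```python
import random

random.seed(7)

# Toy supplier pool: incumbents start with a performance history,
# emerging suppliers start with none, but true quality is identical.
suppliers = (
    [{"name": f"incumbent_{i}", "history": 5} for i in range(5)]
    + [{"name": f"emerging_{i}", "history": 0} for i in range(5)]
)

def score(s):
    # A naive scorer that rewards track record alongside (equal) quality.
    quality = 0.5                       # identical true quality for everyone
    return 0.6 * quality + 0.4 * (s["history"] / (s["history"] + 5))

for _round in range(10):
    ranked = sorted(suppliers, key=score, reverse=True)
    for winner in ranked[:3]:           # only the top 3 win contracts...
        winner["history"] += 1          # ...and only winners build history

selected = sorted(suppliers, key=score, reverse=True)[:3]
print([s["name"] for s in selected])    # incumbents keep winning
```

After ten rounds, no emerging supplier has ever been shortlisted, so none has any history to score on, exactly the dynamic described above.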

When nobody can explain the decision

Many companies hesitate to implement explainability mechanisms, fearing exposure of proprietary decision-making models. This tension between competitive secrecy and ethical AI governance leaves regulators struggling to enforce transparency.
"Transparency is vital to building trust in AI-driven procurement processes. It means procurement teams (and their stakeholders) should be able to understand how an AI tool is arriving at its outputs or recommendations," writes Philip Ideson on the Art of Procurement blog. "AI should be explainable rather than a mysterious source of answers."
A global survey of 2,355 business leaders, conducted by Workday and reported by FutureCFO, found that 43% of CEOs express concerns about the trustworthiness of AI and machine learning systems, while 67% cite potential errors as a top risk. More recently, the 2025 KPMG and University of Melbourne study of over 48,000 respondents from 47 countries, including Singapore, China, India, Japan, and South Korea, confirmed that trust remains a critical challenge, with only “46% of people globally willing to trust AI systems”. The ‘black box’ nature of AI systems raises significant concerns across the C-suite, from operational efficiency to supplier relationships.

Some companies are starting to address this opacity. Singapore’s AI Verify toolkit, developed by the Infocomm Media Development Authority, helps organisations validate the performance of their AI systems against internationally recognised principles through standardised tests.

The toolkit operates through a combination of technical tests and process checks. Users are guided through a testing process that includes a “guided fairness tree” to identify fairness metrics relevant to their specific use case. At the end, AI Verify produces a summary report that helps system developers interpret test results and demonstrate transparency to stakeholders.

Companies including AWS, DBS Bank, Google, Meta, Microsoft, Singapore Airlines, and Standard Chartered Bank participated in developing and testing the framework. The AI Verify Foundation, established in June 2023, now includes premier members like Google, IBM, Microsoft, Red Hat, and Salesforce working to advance AI testing capabilities globally.

As another example, banks such as DBS have integrated LIME (Local Interpretable Model-agnostic Explanations) techniques into credit risk assessment to create transparent loan approval systems.
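LIME works by perturbing an input, querying the black-box model, and fitting a simple local surrogate around one prediction. The sketch below captures that core idea in plain Python with a drastically simplified one-feature-at-a-time perturbation; the credit scorer, its features and its weights are all hypothetical, not DBS’s actual system.

```python
import random

random.seed(0)

def credit_model(income, debt_ratio, years_trading):
    # Stand-in black-box scorer (hypothetical, for illustration only).
    score = 0.4 * min(income / 100_000, 1.0)
    score += 0.4 * (1.0 - debt_ratio)
    score += 0.2 * min(years_trading / 10, 1.0)
    return score

def explain_locally(model, point, scale=0.05, n=200):
    """Perturb one feature at a time and average the score shift per unit
    of change -- a drastically simplified cousin of LIME's surrogate fit."""
    base = model(**point)
    influence = {}
    for feat, value in point.items():
        slopes = []
        for _ in range(n):
            nudged = dict(point)
            nudged[feat] = value * (1 + random.uniform(-scale, scale))
            delta = nudged[feat] - value
            slopes.append((model(**nudged) - base) / (delta + 1e-12))
        influence[feat] = sum(slopes) / n
    return influence

applicant = {"income": 80_000, "debt_ratio": 0.3, "years_trading": 4}
for feat, slope in explain_locally(credit_model, applicant).items():
    print(f"{feat}: local slope {slope:+.2e}")
```

The output tells a loan officer, in plain terms, which features pushed this particular score up or down; the real LIME library generalises this by fitting a weighted linear model over many joint perturbations.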

Similar efforts are emerging across the region. South Korea’s financial institutions are developing fairness indicators to assess AI-enabled services, with audits including fairness tests as part of regulatory compliance. Japan has taken a lighter, principle-based approach to AI governance while investing heavily in AI capabilities; here, 91% of Japanese executives expect to commit more than 10% of their budgets to AI technology, partly to address workforce challenges through automation.

The regulatory response

Governments are beginning to respond, though approaches vary widely across the region.

The ASEAN Guide on AI Governance and Ethics recommends that deployers who procure AI systems from third-party developers should appropriately govern their relationships with these developers through contracts that allocate liability in a manner agreed between parties.

Singapore maintains its characteristic preference for voluntary frameworks over hard regulation. The Cybersecurity Agency of Singapore released Guidelines and Companion Guide on Securing AI Systems in October 2024, outlining guidance for AI system owners to adopt at every stage from design to disposal. As mentioned in the last section, Singapore launched AI Verify in May 2022, a self-assessment AI governance testing framework and toolkit that validates AI system performance against internationally recognized principles through standardised tests.

By contrast, South Korea is taking a more structured approach. The AI Framework Act was passed on 26 December 2024, imposing transparency obligations for high-impact and generative AI. Violations of administrative orders to rectify transparency violations will result in administrative fines of up to KRW30 million. Financial institutions are preparing fairness indicators to assess the fairness of their AI-enabled services, with audits including fairness tests.

China’s regulations focus on data quality and content safety. China’s 2024 Basic Requirements for the Security of Generative Artificial Intelligence Services list discriminatory content as one of 31 safety risks, with requirements structured to address discrimination from multiple stakeholder perspectives. The measures set out obligations on generative AI service providers regarding content moderation, training data requirements, labeling of AI-generated content, data protection protocols and safeguarding user rights.

So, what happens next?

AI adoption across Asia Pacific is accelerating regardless. IDC expects regional AI spending to reach US$175 billion by 2028, growing at over 33 per cent annually. As a BCG survey of over 4,500 employees across nine APAC markets shows, adoption rates are already high: India leads the region with a 92% adoption rate, followed by Indonesia at 89% and China at 87%. Optimism about AI is highest in China (70%), Indonesia (69%), and Malaysia (68%). Overall, 78% of APAC employees now use AI at least weekly, compared to 72% globally.      

Walking away from AI procurement altogether is not realistic.  In one McKinsey case study, a chemicals company deploying AI agents across its sourcing workflow saw procurement staff efficiency improve by 20 to 30 per cent, with value capture rising by up to three per cent. 

Even so, adoption of these more advanced agentic deployments remains slow, held back by cost concerns and industry resistance.

When using generative AI in procurement, companies should train their AI systems on diverse data that includes all kinds of suppliers instead of sticking to the usual options. “With AI infrastructure, businesses can use data analytics and machine learning to make more informed decisions,” notes Jennifer Moceri, Vice President of Global Procurement and CPO at Google, in comments reported by Veridion. Experts recommend ensuring AI models are trained on broad, representative datasets that include small, diverse, and underrepresented suppliers rather than just large, established vendors. Regular bias checks are essential, like routine health check-ups for AI systems.

The financial impact cuts both ways. Buying companies miss out on competitive suppliers. Excluded vendors lose revenue opportunities through no fault of their own. If AI procurement systems systematically disadvantage suppliers from developing economies, they could slow economic development across the region. Research on algorithmic bias in global supply chains warns that if algorithms consistently favour suppliers in developed economies due to data biases or design flaws, “businesses in developing nations, who often bear the brunt of environmental and social challenges, will be further marginalized.” This algorithmic disadvantage, another study finds, particularly affects small and medium-sized enterprises, which are “often crucial drivers of local economies and sustainable practices” but find themselves “unable to compete on a level playing field against algorithmically favored incumbents.”
Asia’s supply chains are already complex, shaped by trade tensions and infrastructure gaps. Adding algorithmic bias makes them harder to navigate and potentially less fair. The frustrating part is that companies already have some of the tools to address it.

Whether they’ll use them, or wait until regulations force their hand, remains unclear.