Policy Advocate on Digital Governance, Information Integrity, & Digital Democracy
On 18 March 2026, more than 800 participants gathered for the United Nations’ first Global AI Multistakeholder Dialogue, a preparatory consultation ahead of the inaugural Global Dialogue on AI Governance scheduled for Geneva this July. Convened by the co-chairs, with UNESCO, the International Telecommunication Union, and the Executive Office of the UN Secretary-General serving as joint Secretariat, the meeting was framed as a major step toward a more coordinated global approach to artificial intelligence.
I was honoured to participate in the first United Nations Global AI Multistakeholder Dialogue in my capacity as Ambassador for Responsible Artificial Intelligence and Antimicrobial Resistance (AMR) for Bangladesh.
*That description is accurate, but incomplete.*
What unfolded was not merely a technical conversation about how to regulate emerging technology. It was a revealing political moment. It showed that the global struggle over AI governance is, at its core, a struggle over power: who gets to define the risks, who gets to shape the rules, whose harms are recognised, and whose voices remain peripheral while decisions of global consequence are being made.
Artificial intelligence is often presented as a tool of efficiency, productivity, and innovation. But governance is never only about tools. Governance is about the social order within which those tools operate. It is about who benefits, who is exposed, and who can seek a remedy when harm occurs. That is why the first UN dialogue matters. It exposed, with unusual clarity, the fault lines that will define whether global AI governance becomes a vehicle for justice or merely another polished system for managing inequality.
One theme stood out above all others: human rights law cannot remain a ceremonial attachment to AI governance. It must be the foundation.
This point is easy to endorse rhetorically and far harder to honour structurally. Everyone is now comfortable saying that AI should be “human rights-based.” But such language means very little if rights are invited into the room only after the main architecture has already been designed around other priorities, such as market growth, national competitiveness, technical scalability, or geopolitical advantage.
*If that happens, human rights do not shape governance. They simply decorate it.*
This distinction matters because AI is no longer a distant or speculative force. It is already embedded into labour markets, welfare systems, policing, education, healthcare, migration control, content moderation and public communication. It is already influencing how people are ranked, filtered, profiled, recommended to, excluded and surveilled. In that reality, rights cannot be treated as a later-stage compliance exercise. They must define the structure from the beginning.
Dignity, equality, non-discrimination, participation, transparency, accountability and access to remedy must not be inserted into AI governance after the central assumptions have been settled. They must be the assumptions.
A second major issue raised in the dialogue was the fragmentation of civil society participation. This may sound procedural, but it is not. It goes directly to the credibility of the governance model itself.
The current global AI ecosystem is crowded with consultations, forums, expert tracks, working groups, diplomatic processes, standards initiatives and parallel policy conversations. This abundance is often celebrated as a sign of openness. In practice, however, it creates a quieter form of exclusion. The actors with resources, mobility, staff depth and institutional access are able to remain present across multiple tracks. Those without such capacity are not.
The burden falls most heavily on under-resourced organisations, especially from the Global South, that are expected to engage in highly technical, fast-moving and multilingual processes while lacking the budgets and infrastructure that wealthier institutions take for granted. The result is formal openness without substantive equity.
*This is not simply a logistical inconvenience. It is a governance failure.*
A process cannot meaningfully call itself multistakeholder if participation is distributed so unevenly that only a narrow class of well-connected actors can engage consistently. Inclusion is not achieved by multiplying forums if most affected communities do not have a realistic chance to shape outcomes across them. In such a system, fragmentation becomes a mechanism through which inequality is reproduced while procedural legitimacy is maintained.
*Then there is the most unsettling absence of all: the ordinary public.*
Governments are represented. Experts are represented. Industry is represented. Civil society networks, to some extent, are represented. But ordinary citizens, the people already living with the consequences of AI systems, remain largely absent from the room.
That absence should alarm anyone serious about democratic legitimacy.
The people most affected by AI are not hypothetical subjects of future policy. They are workers assessed by algorithmic management systems, women subjected to technology-facilitated abuse, young people shaped by recommender architectures, language communities excluded from model design, and citizens whose information environment is increasingly mediated by opaque systems they neither chose nor control. Yet their informed experiences are still weakly reflected in global governance discussions.
This is not a minor participatory gap. It is a legitimacy problem.
If AI governance continues to be built primarily around institutional, diplomatic and corporate perspectives, then it will reproduce the same top-down logic that has already undermined public trust in digital governance more broadly. Public legitimacy cannot be manufactured through technical sophistication alone. It requires governance structures that are responsive to lived experience, not just expert abstraction.
Another of the most important interventions during the dialogue concerned the role of the Global South. Here, the discussion cut through one of the most persistent habits of international technology policy: the assumption that governance models developed around the concerns of high-capacity jurisdictions can simply be transferred outward to everyone else.
*They cannot.*
The call for a global governance floor rather than a ceiling was one of the most meaningful contributions of the session. It recognised that countries do not encounter AI through a single risk profile. For a handful of powerful states and corporations, the dominant concern may be frontier systems, advanced compute governance, or long-horizon safety scenarios. Those concerns are real. But for many countries in the Global South, the immediate and pressing issues are different: exploitative data practices, opaque platform enforcement, discriminatory automation, disinformation, language exclusion, lack of consumer protection, and limited access to remedy.
To pretend that all contexts can be governed by the same priorities in the same sequence is not a neutral choice. It is a political choice that privileges the worldview of the already powerful.
The Global South is not a passive recipient of regulatory ideas produced elsewhere. It has distinct institutional conditions, social vulnerabilities, democratic challenges and development priorities. It must be recognised not as an audience for imported governance, but as a co-author of the global framework itself. Anything less would merely repeat older patterns of global standard-setting in which power remains concentrated while inclusion is largely symbolic.
The dialogue also laid bare a serious and urgent gap in the current governance conversation: the absence of meaningful cross-border infrastructure to identify and respond to AI-related harms.
This is astonishing given the transnational nature of AI systems. Models, platforms, automated services, and generative tools operate across jurisdictions. Harm does as well. A single system can affect people in multiple countries at once, but accountability remains fragmented across national regulators, domestic legal systems, and isolated policy frameworks. As a result, harms are often visible only in fragments. One country sees one effect, another sees a different one, and no institution has the authority or structure to assemble the full picture.
There is still no robust global mechanism for incident reporting, coordinated evidence collection, shared regulatory alerting, or rights-based cross-border response. This is not a minor technical omission. It is a structural vacuum. Without such infrastructure, global AI governance risks remaining a language of principle without the institutional means to respond when harms outpace legal systems.
*And then there is the issue too often treated as peripheral when it is, in fact, central: Technology-Facilitated Gender-Based Violence (TFGBV).*
TFGBV cannot be dismissed as a niche concern or a specialist matter for separate policy silos. It sits squarely at the centre of today’s AI harms. It is intensified by recommender systems, generative image tools, synthetic media, identity manipulation, automated abuse, weak platform enforcement, and inadequate legal remedies. It affects bodily autonomy, mental well-being, equal participation, freedom of expression, and democratic voice.
When women and girls are driven out of digital spaces through technology-enabled violence, this is not simply an online safety issue. It is a governance issue, a rights issue, and a justice issue. Any AI governance framework that fails to meaningfully integrate TFGBV is not merely incomplete. It is fundamentally flawed.
The broader lesson from the first United Nations Global AI Multistakeholder Dialogue is clear. The debate over AI governance is not only about controlling technology. It is about deciding what kind of international order will govern that technology, and whose interests that order will serve.
This conviction is informed by lessons we are currently drawing from practice. Bangladesh NGOs Network for Radio and Communication (BNNRC) is implementing the project "Strengthening Resilience Against Technology-Facilitated Gender-Based Violence (TFGBV) and Promoting Digital Development" under the Nagorikata: Civic Engagement Fund (CEF) programme, with technical support from GFA Consulting Group and funding from Switzerland, Global Affairs Canada, and the European Union.
Will AI governance be built as a genuinely rights-based, inclusive, and accountable framework? Or will it become another diplomatic and regulatory architecture in which power speaks first, legitimacy is assumed, and those most exposed to harm are invited in only after the terms have been set?
*That is the real choice now confronting the international community.*
The danger is not only that the world will adopt weak rules. It is that it may adopt impressive-looking rules that leave underlying inequalities intact. A governance system can be globally branded, procedurally elaborate, and rhetorically inclusive, yet still reproduce the very hierarchies it claims to address. That danger is particularly acute in the AI era, where technical complexity can easily become a shield for political imbalance.
*The international community should resist that path.*
AI governance cannot be credible if it is written mainly by those closest to power and then presented as a universal settlement. It cannot claim legitimacy while ordinary citizens remain peripheral. It cannot call itself rights-based while treating human rights as secondary. It cannot speak of inclusion while structuring participation in ways that systematically privilege the well-resourced. And it cannot protect people in practice while leaving cross-border harms and gendered violence inadequately addressed.
The Bangladesh NGOs Network for Radio and Communication will submit written input ahead of the 30 April 2026 deadline. That contribution will be guided by a simple conviction: the future of AI governance must be grounded in human rights, democratic legitimacy, Global South agency, platform accountability, and meaningful protection against real-world harm.
*Because the central question is no longer whether artificial intelligence should be governed.* *It is whether governance will serve the many or merely formalize the power of the few.*
AHM. Bazlur Rahman | Specialist in Advancing Digital Democracy | MSS in Government & Politics, Bachelor of Laws (LL.B) | Chief Executive Officer, Bangladesh NGOs Network for Radio and Communication (BNNRC) | Ambassador for Responsible Artificial Intelligence & Antimicrobial Resistance (AMR) for Bangladesh.
Policy Research Fellow | Shaping the Future of Media, Information Integrity & Society in the Era of the Fourth Industrial Revolution.