
A milestone in AI Governance: UK-US agreement paves the way for global AI safety

04 April 2024
The UK-US agreement marks a pivotal milestone in AI governance, signalling a significant step towards global collaboration on the safe and responsible development of artificial intelligence.

In November 2023, leaders from around the world gathered for a pivotal moment at the AI Safety Summit. Among them were UK Prime Minister Rishi Sunak and Technology Secretary Michelle Donelan, alongside notable figures such as OpenAI's Sam Altman and technology billionaire Elon Musk. Their mission? To tackle head-on the pressing need for reliable AI oversight.

Out of this landmark event emerged several outcomes, including:

  • The Bletchley Declaration on AI safety, endorsed by 28 nations from regions including Africa, the Middle East and Asia, emphasised the critical importance of collectively managing the potential risks of AI and ensuring it is developed and deployed safely and responsibly for the benefit of all.
  • Leaders in AI and global governance recognised the imperative for collaboration in testing the next generation of AI models against a range of critical national security, safety and societal risks.

What next? Global initiatives propel AI governance forward

Since then, the momentum has been palpable, with nations around the world taking significant strides. On Monday 1st April 2024, the United Kingdom and the United States signed a landmark deal to work on testing advanced AI. This historic pact signifies a transatlantic collaboration aimed at developing robust methods for evaluating the safety of AI tools and the systems that underpin them.

Around the world, echoes of this collaborative spirit are evident. China's pioneering AI regulations, such as the "Internet Information Service Algorithmic Recommendation Management Provisions" that came into effect in March 2022, stand as a prime example. President Joe Biden's executive order requires developers of the most powerful AI models to share safety test results with the US government. At the same time, the EU's groundbreaking AI Act, the first legislation of its kind, imposes binding requirements to mitigate AI risks. Together, these measures underscore a shared recognition of AI's transformative power and the accompanying responsibilities.

How can the world of ISO fit into this?

Amidst this whirlwind of regulatory activity stands ISO 42001, the world's first AI-specific ISO standard. Published at the end of 2023, it lays the foundation for ethical and transparent AI development and offers a beacon of assurance for consumers and stakeholders alike.


Visit our ISO 42001 page to enhance your understanding of this globally recognised AI-specific standard.

NQA's Final Thoughts

At its heart, this global endeavour is driven by a singular goal: to ensure that AI evolves safely, ethically and transparently. By instilling confidence in AI products and solutions, it paves the way for a future in which AI is trusted and adopted.