Is AI the 21st Century’s ‘Space Race’?
18 Mar 2025
Michael Rose of DRD Partnership and Jacob Turner of Fountain Court Chambers examine the legal, communications and reputational challenges associated with the so-called 'AI space race'.
The sudden emergence of the Chinese-built DeepSeek chatbot in January 2025 sent shockwaves through the global AI community. Developed for a fraction of the cost of better-known, U.S.-developed rivals such as ChatGPT and Google Gemini, DeepSeek's startling debut drew comparisons to the Soviet launch of the world's first satellite, Sputnik, in 1957.
Whether or not the comparison fits neatly, an AI arms race certainly feels well underway. The U.S. and China are cast as the two superpowers competing to dominate a technology whose consequences and impact are not yet fully apparent. Much like space exploration, AI's potential is widely recognised, and seemingly limitless, but its long-term impact remains a prominent unknown.
Unlike in the early days of the space race, it is corporations rather than governments that are driving AI development forward. That does not mean governments, particularly the U.S. and Chinese administrations, freshly at odds since President Trump's return to the White House, are not taking a keen interest in 'beating their rivals' to the technology. The rewards for those who get ahead in the great AI race are potentially massive.
So, what does this all mean for companies and entities operating in the AI space, or incorporating AI technology into their day-to-day operations? A newfound sense of urgency has intensified the feeling among many companies, organisations and individuals that they must keep up with the frenetic pace of change.
Companies looking to tap into this new rush to AI need to consider carefully how they communicate the change. Rushing to implement new AI systems without proper explanation is likely to trigger concern and speculation. Adoption must not look like 'change for the sake of change' prompted by newspaper front pages. Setting out the reasons 'why', especially to those affected, can help inoculate against potential unrest and reputational damage.
Alongside major developments in the underlying technology, regulation to control and constrain its use is also emerging, albeit not at quite the same pace. Organisations looking to adopt AI will need to stay as much on top of the risks arising from the current and future direction of legislation as of the opportunities the technology creates.
During the original space race, and despite it being the height of the Cold War, the UK, US and USSR came together in 1966 to agree the Outer Space Treaty, which set basic rules around what countries could do in space. It remains to be seen whether similar international agreement will be possible for AI.
Just as there is competition between different countries to lead in the development of the best AI systems, there is also competition to dominate AI regulation. China was an early starter, announcing in its 2017 AI Action Plan that it intended by 2025 to become a world leader in regulating AI, and became the first country to enact a law dedicated to responsibility for generative AI in 2023. After a slightly slower start, the EU enacted the wide-ranging and comprehensive AI Act in mid-2024.
Unlike China and the EU, in the past five years the US and UK have oscillated between favouring regulation and opposing it. Notably, one of President Trump’s first actions on entering the White House in January 2025 was to repeal an Executive Order of President Biden which had mandated significant controls on the most powerful AI systems.
Then, in February 2025, the US and UK were the only two major countries to refuse to sign the final statement at the Paris AI Action Summit, a 'Davos'-style conference attended by leading AI companies and countries. One of their main objections appears to have been to references to AI needing to be 'sustainable and inclusive' – terms which the current US administration might regard as empty and overly 'woke'.
"Just as there is competition between different countries to lead in the development of the best AI systems, there is also competition to dominate AI regulation."
For companies, navigating the changing tides of AI regulation requires a combination of knowledge and foresight. There are dangers of over- as well as under-compliance. Despite the fracturing of global consensus on AI ethics, there are nonetheless some global trends which can assist companies seeking to create their own AI strategies. Values such as transparency, avoidance of unwanted bias, and close attention to the ownership and appropriateness of input data are all likely to be desirable for organisations operating in any jurisdiction.
So, what does this all mean for those wanting to use AI here and now? Amidst plenty of lofty rhetoric and ambition, it can be hard to discern the best course of practical action. Those adopting AI must therefore take steps to show they are responsible actors. Simply waiting for external regulation won't suffice.
There are pragmatic measures companies can take. Legal advice on AI Act compliance is now essential for any organisation operating in the EU. Develop an AI 'code of conduct' for employees; produce codes of practice for using the technology with stakeholders, partners and clients; hold internal 'responsible use' and 'internal regulation' workshops so that all parts of a business understand the benefits and risks of AI; and share and learn from best practice drawn from industry forums, both internally and with clients.
Putting the values outlined above at the heart of their AI approach can keep entities in lockstep with the prevailing wider mood. By combining the underlying technology and its proper governance with a suitable communications strategy and practical, deliverable implementation measures, companies can make sure they are writing the right stories on AI and inspiring confidence in its potential amidst a cacophony of background noise.