How businesses can futureproof their AI strategies
Getting ahead of the UK Government’s AI whitepaper
On 29 March 2023, the UK Government released its plans to balance AI risks with innovation, taking its first step towards harnessing the true potential of the technology. The AI regulation whitepaper follows growing concerns around the ethical implications of generative AI and the government’s commitment to cement the UK as a world-leading hub for the technology by 2030. The government has announced an initial £100 million investment, and AI is projected to raise global GDP by 7% over the next decade.
Whilst the whitepaper sets the foundations, practical moves towards responsible AI are more complicated, and firms may not know where to begin. Guidance from individual regulators remains vague, and translating the whitepaper’s broad principles into tangible actions may be a stumbling block for many. BIP UK today identifies ways businesses can fill the gaps left by the whitepaper and investigates how firms can develop their own AI strategies responsibly and effectively.
A personal code of conduct:
Although AI regulation is being developed, such as the whitepaper and the EU AI Act, implementation is yet to hit the mainstream. The whitepaper’s five principles, “safety, security and robustness”, “appropriate transparency and explainability”, “fairness”, “accountability and governance” and “contestability and redress”, are broad and open to subjective interpretation by companies, risking misapplication.
With regulation yet to bite, businesses must take matters into their own hands. Firms should implement and embed their own AI codes of conduct and guidance into their organisational cultures. Formalising AI model development processes and incorporating ethics into AI governance mechanisms, for instance by requiring sign-offs for any changes to existing AI systems, is an effective route towards responsible use of the technology.
Flexibility can further foster an ethical environment. Businesses must question whether AI is appropriate for a given task: when assessing AI use cases and proposing new solutions, it is important to weigh up whether the technology will actually generate the most suitable result.
Streamlining culture and cooperation:
Critical to channelling the potential of AI is having a culture that embraces change and uses agile methods to introduce new ways of working. Everyone needs to understand the reason and benefit of change, rather than fearing it.
It’s also important to have clear communication and designated roles within a company. Defining who is accountable for the outcome of a model and who is responsible for the technical decision is essential for creating a successful team and encouraging careful thought when managing the technology.
As AI is a relatively new but flourishing field, legislation is ever-changing, and businesses will inevitably have to adapt. Cooperation between the public and private sectors will be key to keeping up: businesses must engage directly with the public sector, flagging any regulation that stifles AI development, while the public sector simultaneously needs to engage actively with leading businesses and be receptive to their evidence.
Developing trust through transparency:
AI’s rapid acceleration has led to real societal and governmental concerns. Italy recently banned OpenAI’s ChatGPT, citing violations of the EU’s existing GDPR, and the Future of Life Institute called for a pause of at least six months in the training of the most powerful AI systems.
With accusations of bias in AI and its perceived threat to jobs, the whitepaper adopted a socially conscious approach to the technology. It highlighted the need to restore public and employee trust, a crucial asset for business growth. A sole focus on profit can damage a business’s reputation, and AI strategies must reflect this mindset. Given that a recent study by DotEveryone found 79% of technology workers would like more practical resources to help them with ethical management, businesses can go one step further in ensuring transparency.
Maintaining company-wide AI literacy and greater internal education on the technology is a useful step towards visibility. Interactions with AI systems must be labelled, with clear processes for consumers to obtain information about the model or request a human review. Additionally, creating an AI model catalogue built on a common AI metamodel is an effective tool. Model cards, such as those used by Hugging Face, contain handy, accessible information on the model. Providing details and open communication keeps everyone in the loop, rebuilding confidence in the use of AI.
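For illustration only, a catalogue entry along these lines could record who is accountable for a model, what it is for and where its limits lie. The whitepaper does not prescribe any format, and every field and name below is a hypothetical sketch, not a Hugging Face or regulatory schema:

```python
from dataclasses import dataclass, field

@dataclass
class ModelCard:
    """Hypothetical entry in an internal AI model catalogue."""
    name: str
    owner: str                # team accountable for the model's outcomes
    intended_use: str         # the task the model was approved for
    limitations: list[str] = field(default_factory=list)
    human_review_contact: str = ""  # route for consumers to request human review

    def summary(self) -> str:
        # One-line description suitable for a catalogue listing
        return f"{self.name} (owner: {self.owner}): {self.intended_use}"

# Example (all values invented):
card = ModelCard(
    name="invoice-classifier-v2",
    owner="finance-analytics-team",
    intended_use="Routing supplier invoices to the correct cost centre",
    limitations=["Trained on UK-format invoices only"],
    human_review_contact="ai-review@example.com",
)
print(card.summary())
```

Keeping entries like this in a shared catalogue, with sign-off required before any field changes, is one way to make the accountability and transparency principles concrete.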
Josep Alvarez Perez, Head of Bip xTech UK
“The government’s whitepaper is a welcome addition to the ever-growing discourse around the need for regulation of AI use within society. Yet, with expanding legislation an inevitability, businesses need to get ahead of the game. Taking the right preparatory steps can help companies maximise the value of AI in an ethical way. The key steps here, including transparency, cooperation and personal regulation, are important considerations for guiding the journey towards responsible AI.”