Out-Law Analysis 4 min. read
23 May 2024, 1:21 pm
The use of artificial intelligence (AI) in sport continues to surge with the technology being employed in areas such as training and performance, security and safety, and fan engagement.
With the huge benefits this revolutionary technology can bring to the sports industry, sporting entities should also have an eye on the risks and take action accordingly.
AI use in sport has grown exponentially in recent years, and the market for AI services in sport is expected to grow from $1.85 billion in 2023 to $2.43 billion in 2024. This represents a compound annual growth rate of 31.2%, according to the latest Research and Markets report.
However, along with the exciting new experiences and improvement AI can bring comes a new set of risks.
One of the key uses of AI in sport currently is in scouting and onboarding. This involves the use of machine learning algorithms that use data to evaluate players’ skills, ranking them in various criteria. This allows sports clubs to make better recruitment choices.
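The kind of ranking described above can be illustrated with a minimal sketch: scoring players on weighted criteria and sorting them. The metric names and weights here are invented for illustration and do not come from any real scouting system.

```python
# Illustrative sketch: rank prospective players by a weighted score
# across hypothetical scouting metrics (names and weights are assumptions,
# not drawn from any real scouting platform).

def rank_players(players, weights):
    """Return players sorted by weighted score, best first."""
    def score(p):
        return sum(weights[m] * p[m] for m in weights)
    return sorted(players, key=score, reverse=True)

prospects = [
    {"name": "A", "pace": 80, "passing": 70, "finishing": 60},
    {"name": "B", "pace": 65, "passing": 85, "finishing": 75},
    {"name": "C", "pace": 90, "passing": 60, "finishing": 55},
]
weights = {"pace": 0.3, "passing": 0.4, "finishing": 0.3}

ranked = rank_players(prospects, weights)
```

A real system would of course learn the weights from outcome data rather than fix them by hand, but the ranking step itself looks much like this.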
Blockchain technology can also be used to streamline the process of drafting and managing player contracts. Smart contracts, powered by blockchain, can automate various aspects of player contracts including payment terms, performance incentives, and royalty distribution. This can make contracts more transparent and enforceable, reducing disputes.
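As a rough illustration of the performance-incentive logic such a contract might encode, consider the sketch below. This is plain Python rather than actual on-chain code, and the clause values are invented.

```python
# Illustrative sketch of a performance-incentive clause of the kind a smart
# contract might automate (pure Python, not blockchain code; the fee, bonus
# rate, and threshold are invented figures).

def incentive_payout(base_fee, goals, bonus_per_goal=10_000, goal_threshold=10):
    """Release the base fee, plus a per-goal bonus once a threshold is met."""
    bonus = bonus_per_goal * goals if goals >= goal_threshold else 0
    return base_fee + bonus

payout = incentive_payout(base_fee=500_000, goals=12)
```

The appeal of encoding such clauses on-chain is that the payout condition is evaluated automatically and identically for both parties, which is the transparency benefit the article describes.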
Training and performance can also be boosted by the use of AI. For example, AI simulations can assist with technique and skill development through video analysis and biomechanical assessment, enhancing training experiences by facilitating data-driven decision-making for coaches and athletes.
Ticket sales and other venue-related tasks can be assisted by AI. Ticket sales can be predicted by analysing factors such as opponent strength and historical attendances, providing valuable insights for ticket pricing. AI video analytics can also identify hot spots within the stadium, enabling precision-targeted advertising that caters to fan preferences.
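Attendance forecasting of the kind described above can be sketched with a simple least-squares regression on historical data. The figures below are invented for illustration; a real model would draw on many more factors than opponent strength alone.

```python
# Illustrative sketch: predict match attendance from opponent strength using
# ordinary least-squares regression on historical data (all figures invented).

def fit_line(xs, ys):
    """Ordinary least squares for y = a + b*x; returns (a, b)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) \
        / sum((x - mx) ** 2 for x in xs)
    return my - b * mx, b

# Hypothetical history: opponent strength rating vs. attendance
strength = [60, 70, 80, 90]
attendance = [30000, 34000, 38000, 42000]

a, b = fit_line(strength, attendance)
predicted = a + b * 85  # forecast for an opponent rated 85
```

The fitted line then feeds directly into pricing decisions: a forecast near capacity supports higher prices, while a weak forecast may justify discounting.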
Computer vision can allow crowd density to be monitored inside stadiums to provide an extra layer of safety. Face recognition can also work towards a crackdown on ticket fraud and counterfeit merchandise to enhance transparency and security for sports firms, protecting fans from scams while maintaining brand integrity.
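The safety-monitoring step can be sketched as a simple threshold check over per-zone density. In practice the headcounts would come from a computer-vision model; here they are hard-coded assumptions, as is the safety threshold.

```python
# Illustrative sketch: flag stadium zones whose estimated crowd density
# exceeds a safety threshold. Real headcounts would come from a
# computer-vision pipeline; these figures are invented.

def flag_overcrowded(zones, max_per_sqm=4.0):
    """Return names of zones above the density threshold (people per m^2)."""
    return [name for name, (count, area) in zones.items()
            if count / area > max_per_sqm]

zones = {
    "north_stand": (4800, 1000),  # (headcount, area in m^2)
    "concourse_b": (900, 200),
    "east_stand": (3000, 900),
}

alerts = flag_overcrowded(zones)
```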
Additional benefits for fans include AI-powered chatbots providing personalised engagement, analysis of social media campaigns to drive engagement, and smart contracts enabling faster payouts that avoid delays or disputes.
Media and broadcasting across the sports industry is also being improved with the help of AI. For example, AI can help choose the most appropriate camera angles during a match, as well as providing accurate and timely statistics to improve real-time commentary.
AI technology is still largely in its infancy, and it still needs human intervention and verification. However, there are other risk-mitigation steps that organisations can take to ensure the safe and accurate use of AI in sport.
Having a clear and well-communicated policy on permitted AI use across an organisation is crucial for the safe use and deployment of AI technology and AI generated content. This policy should be backed up by staff training and shared with the organisation’s partners and suppliers to ensure complete visibility and transparency of when AI is being used within the organisation and what impact this may have.
Compliance with data protection laws, such as the UK General Data Protection Regulation, is another key consideration for organisations. The nature of AI (i.e. a new form of technology that consumes vast amounts of personal data) means that it could be viewed as a high-risk processing activity. Adoption of AI systems does not require a wholesale change to an organisation’s existing data protection compliance framework. Instead, prospective AI systems can be reviewed like other potentially high-risk processing activities, typically through a “data protection impact assessment”. AI, alongside ad tech and children’s data, is one of the three priorities for the UK’s Information Commissioner’s Office in 2024.
Use of an organisation’s proprietary data within generally available AI solutions is to be avoided unless due care is taken. Instead, the use of private AI solutions specific to an organisation can help minimise risk. This allows data to be ring-fenced and regularly backed up without the risk of it mixing with data outside the organisation’s control. However, such solutions may not always be feasible in practice: private AI solutions are costly to procure, generally available AI applications are often very useful, and individuals across the organisation are likely to be using those applications anyway.
If using specific solutions, it is imperative that the data on which these solutions are trained is fit for purpose. This means taking the time to cleanse and verify data is critical, helping to exclude bias and discrimination from AI output.
The UK is yet to introduce legislation specifically covering the use of AI, but awareness and understanding of the regulatory landscape is important. Things are changing fast, with the introduction of the US executive order on Safe, Secure, and Trustworthy Artificial Intelligence in October 2023, along with a draft EU artificial intelligence regulation passing through the EU parliament in March this year. Awareness of any regulation and rule changes is crucial.
The terms on which AI solutions are being made available to sports industry organisations should also be carefully checked to ensure that the extent of liability exclusions and caps is understood, and that the organisation is aware of the full risk of adopting and using the AI technology. Due diligence on the financial standing and insurance cover of smaller AI market players may also be advisable, as whilst these entities may well be offering more favourable terms of use, they may not be in a position to stand by their contractual liability.