Britain set to rethink AI oversight amid Big Tech data boom


The head of Britain's financial regulator has announced plans to examine how Big Tech companies' access to vast troves of data could lead to better financial products and more options for users.

The regulatory review seeks to maximize artificial intelligence's (AI's) potential for innovation, competitive pricing and expanded options for consumers and businesses. The move reflects a global trend of scrutinizing tech companies' power and potentially checking it with new regulations.

“This announcement is interesting in that the UK appears to be taking a different approach to innovation than the EU,” Gal Ringel, co-founder and CEO of Mine, a global data privacy management firm, told PYMNTS. “The EU, which just passed the AI Act, regularly goes out of its way to regulate technology before it reaches the market. The UK taking the perspective of working hand-in-hand with Big Tech to harness data insights and help build better products instills a lot of confidence in business and the free market.”

“Neither approach is better than the other, as the EU prioritizes end-user security and privacy while places like the US prioritize end products, but seeing the UK start to move further away from the EU makes the AI space heat up,” he added.

A call to share data

During a presentation, Nikhil Rathi, chief executive of the UK's Financial Conduct Authority (FCA) and chair of the Digital Regulation Cooperation Forum, laid out his main concerns with big technology companies. Rathi said that if the FCA's analysis shows tech companies' data can benefit financial services, the regulator will encourage more data sharing between tech and financial companies.

“Dominance by a handful of firms and further entrenchment of power will threaten competition and innovation,” Rathi said in the speech. “And, as well as promoting effective competition, the FCA's primary objective is to protect consumers from harm.”

The FCA also issued a statement requesting input on data-sharing practices between Big Tech and financial services firms. While Big Tech companies have access to financial data through open banking, they are not obligated to reciprocate by sharing their own data with the financial sector.

Ringel noted that the bigger the dataset, the more insights can be drawn from it and the more reliable a baseline it provides for AI or other products.

“Those advantages, especially when combined with data collection and scraping methods that do not violate user privacy or security, can lead to innovation that brings faster and more intuitive technologies to the consumer market,” he added.

Growing calls for regulation

The decision by UK regulators to review their approach to Big Tech's use of AI and data reflects a wider global trend visible in several recent regulatory initiatives. For example, the European Union has passed a sweeping AI Act.

In the United States, there has been increased scrutiny under the Biden administration, which has advocated for more stringent enforcement of antitrust laws, particularly those related to Big Tech's data practices. The Chinese government has implemented strict data protection laws and cracked down on the previously unregulated expansion of tech firms like Alibaba and Tencent.

As PYMNTS reported earlier, Britain is clearly staking out a “pro-innovation” position on AI regulation, apart from its EU counterparts, who have unanimously agreed on the final text of the European Union's AI Act. The AI Act adopts a risk-based framework for regulating AI applications. Once implemented, it will affect every AI company serving the EU market and any users of AI systems within the EU, although it does not extend to EU-based providers serving customers outside the bloc.

In contrast, the UK government prefers an alternative regulatory framework that differentiates AI systems based on their capabilities and the consequences of AI risks rather than the technology itself. According to the UK government's February response to a consultation on AI regulation, the plan is to implement sector-specific regulation guided by five core AI principles rather than standalone AI legislation. This approach aims to foster innovation by tailoring regulation more closely to each sector's specific requirements and risks.

Benoit Koenig, co-founder of Veesion, which makes AI-powered gesture recognition software, told PYMNTS that the EU AI Act is essential to building trust in AI technologies.

“Businesses operating within the EU will need to pay more attention to compliance, particularly for AI applications that are considered high-risk, which may include areas such as surveillance and biometric identification,” he added. “This can increase operational costs and demand more rigorous testing and documentation processes.”

“US companies or customers with EU operations must adapt their AI strategy to comply with the upcoming AI Act,” Koenig said.

“It could also serve as a precursor to similar regulations in the US, encouraging businesses to proactively adopt more stringent ethical standards for the development and use of AI,” he added.

Overall, while the Act presents some challenges, it also provides an opportunity for businesses to lead the way in the ethical use of AI, fostering innovation that is not only technologically advanced but also socially responsible and trustworthy to the public.

For all PYMNTS AI coverage, subscribe to the daily AI Newsletter.

