Today, people often talk about “responsible” AI, but what do they really mean?
In general, being responsible means being aware of the consequences of our actions and making sure they don’t harm or endanger anyone.
But there’s a lot we don’t know about AI. It is very difficult to say, for example, what the long-term consequences will be of developing machines that can think, create and make decisions on our behalf. They will affect human jobs and lives in ways that no one can yet predict.
One of the potential threats is the violation of privacy, which is generally accepted as a basic human right. AI systems can now recognize us by our faces when we’re in public and are routinely used to process highly sensitive information such as health and financial data.
So what does responsible AI look like when it comes to privacy, and what challenges do businesses and governments face? Let’s take a look.
Consent and Privacy
AI often involves using data that many of us consider private – such as our location, finances, or shopping habits – to offer us services that make life easier. This could be route planning, product recommendations, or financial fraud protection. In principle, this is all possible because of consent – we consent to the use of our information. Therefore, its use does not violate our privacy.
Acting with respect for consent is one way businesses can ensure they are using AI responsibly. Unfortunately, this does not always happen!
For example, in the Cambridge Analytica scandal, personal data was collected from millions of Facebook users without their consent and used for political modeling.
Businesses and even law enforcement agencies have faced public backlash for using facial recognition technologies without taking adequate steps to obtain consent.
An important question is when consent ceases to be meaningful. Is it when the scope is so broad that it can be interpreted in ways the consenter never intended? Or when the terms and conditions presented when seeking consent are so complex that they are routinely misunderstood?
If privacy is to be managed responsibly, systems and processes for obtaining clear, informed consent must be built into the core of any AI system.
One example comes from software company Adobe, whose generative AI tools differ from competitors (such as OpenAI’s ChatGPT) in that they are trained only on data for which creators have explicitly given consent.
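To make that idea concrete, here is a minimal, hypothetical sketch of purpose-scoped consent in Python. Processing is gated on a narrowly named purpose, so a blanket “we may use your data” agreement would not pass the check. The names here (ConsentRegistry, recommend_products and so on) are illustrative, not any vendor’s actual API.

```python
from dataclasses import dataclass, field

# Hypothetical sketch: purpose-scoped consent, checked before any processing.

@dataclass
class ConsentRecord:
    user_id: str
    # Purposes the user explicitly agreed to, e.g. {"route_planning"}.
    granted_purposes: set = field(default_factory=set)

class ConsentRegistry:
    def __init__(self):
        self._records: dict[str, ConsentRecord] = {}

    def grant(self, user_id: str, purpose: str) -> None:
        record = self._records.setdefault(user_id, ConsentRecord(user_id))
        record.granted_purposes.add(purpose)

    def revoke(self, user_id: str, purpose: str) -> None:
        record = self._records.get(user_id)
        if record:
            record.granted_purposes.discard(purpose)

    def is_granted(self, user_id: str, purpose: str) -> bool:
        record = self._records.get(user_id)
        return bool(record and purpose in record.granted_purposes)

def recommend_products(registry: ConsentRegistry, user_id: str,
                       history: list[str]) -> list[str]:
    # Processing is gated on a narrow, named purpose; consent given for
    # some other purpose (or none) raises instead of silently proceeding.
    if not registry.is_granted(user_id, "product_recommendations"):
        raise PermissionError(f"No consent from {user_id} for product_recommendations")
    return sorted(set(history))[:3]  # placeholder recommendation logic

registry = ConsentRegistry()
registry.grant("alice", "product_recommendations")
print(recommend_products(registry, "alice", ["kettle", "toaster", "kettle"]))
```

Because revoking a purpose takes effect on the very next check, this structure also makes withdrawal of consent straightforward to honor.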
Data Security
Whenever there is an obligation to maintain confidentiality, data must also be kept safe and secure. We can collect all the consent in the world, but if we fail to protect the data itself, we’ve let our customers down and compromised their privacy. That is quite irresponsible!
Data theft and breaches are getting bigger and more damaging all the time. In late 2023, an attack on transcription service provider PJ&A compromised the sensitive health care records of nearly 14 million people. And about nine million were affected by the ransomware attack that targeted MCNA Dental.
In another incident, hackers accessed live feeds from more than 150,000 security cameras managed through the cloud platform of security company Verkada, whose products include facial recognition technology. The footage showed activity in prisons, hospitals, clinics and private premises.
Taking responsibility here means ensuring that security measures are up to the task of defending against today’s most sophisticated attacks, as well as anticipating and preventing the threats and attack vectors likely to emerge tomorrow.
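Security is a discipline in itself, but one basic building block is encrypting sensitive records at rest, so that a stolen database dump is not immediately readable. Below is a minimal sketch using the Fernet recipe from the widely used Python `cryptography` package; in a real deployment the key would live in a secrets manager or hardware security module, never alongside the data it protects.

```python
from cryptography.fernet import Fernet  # pip install cryptography

# Sketch only: in production the key comes from a secrets manager or HSM,
# never a local variable stored next to the data.
key = Fernet.generate_key()
fernet = Fernet(key)

# Encrypt a sensitive record before writing it anywhere persistent.
record = b'{"patient_id": "12345", "diagnosis": "..."}'
token = fernet.encrypt(record)

# A leaked token is useless without the key; decryption requires it.
assert fernet.decrypt(token) == record
print(token[:20], b"...")
```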
Personalization vs. Privacy
One of the big promises of AI is more personalized products and services. Rather than buying products tailored to groups of people similar to me, I’ll buy insurance that covers my own specific needs and risks. Or a car that understands my driving habits and my likes and dislikes when it comes to in-car entertainment and climate control.
It sounds great, but customizing experiences clearly comes at the cost of our privacy. This means that companies collecting data for this purpose must develop a clear understanding of where to draw the line.
One way to deal with this is through on-device (edge computing) systems that process data without it ever leaving the owner’s possession. These systems can be difficult to design and build because they have to run in the relatively low-power environment of a user’s smartphone or device rather than a high-performance cloud data center. But it’s a way to handle privacy responsibly while providing personalized services.
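As a rough illustration of the pattern (not any particular product’s implementation), the sketch below keeps raw location history on the device and shares only a single coarse, derived flag with the server:

```python
# Hypothetical sketch of the on-device pattern: raw location history
# never leaves the user's device; only a derived preference does.

RAW_LOCATION_HISTORY = [  # stays on the device
    ("gym", 12), ("office", 40), ("cafe", 5),
]

def derive_profile(history: list[tuple[str, int]]) -> dict:
    # All computation over the raw data happens locally.
    total = sum(hours for _, hours in history)
    top_place, top_hours = max(history, key=lambda x: x[1])
    # Only a single coarse label is derived for sharing.
    return {"commuter": top_place == "office" and top_hours / total > 0.5}

def send_to_server(payload: dict) -> None:
    # The server sees the derived flag, never the raw history.
    print("uploading:", payload)

send_to_server(derive_profile(RAW_LOCATION_HISTORY))
```

The personalization signal still reaches the service, but the sensitive detail it was computed from never does.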
We also have to be careful not to get too personal – users can easily be creeped out if they realize the AI knows too much about them! Understanding what level of personalization is genuinely helpful to the user, and what crosses the line into intrusiveness, is key here.
Privacy by Design
Consent, security, and striking the right balance between personalization and invasion of privacy are the cornerstones of building responsible AI that respects privacy. Getting it right requires a careful look at our own processes and systems, as well as the rights, feelings and opinions of consumers.
Getting it wrong means eroding consumer confidence in our AI-powered products and services, ultimately reducing their chances of achieving their potential.
I have no doubt that we will see plenty of good and bad as companies anticipate and adapt to society’s changing standards and expectations. Legislation will have a role, and we have seen steps towards this in initiatives such as the EU AI Act. But at the end of the day, it will be up to the people who develop and sell these tools — as well as us users — to define what it means to be responsible in the fast-paced world of AI.