Artificial Intelligence - Privacy and Ethical Risks

2 Aug, 2021

The concept of artificial intelligence (AI) is not new to the public: we have seen it in sci-fi movies and, increasingly, in our daily lives as well. One of the most prevalent applications of AI is facial recognition software, which is advancing far more quickly than regulation can keep pace with. The use of AI is growing and transforming industries such as law enforcement, retail, hospitality, marketing and advertising, events, social media and entertainment. The primary motivation for businesses to adopt AI is to provide greater convenience and a seamless experience for consumers.

Did you know?

According to MarketsandMarkets, a market research company, the post-COVID-19 global facial recognition market size is expected to grow from USD 3.8 billion in 2020 to USD 8.5 billion by 2025, at a Compound Annual Growth Rate (CAGR) of 17.2% during the forecast period.

How does it work?

Simply put, in the initial stage a facial recognition system measures and records the face as a set of numbers, such as the distance between the eyes, the shape of the chin and the size of the nose. This mathematical representation enables the software to identify the same face when it appears again in a photo or video.
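This matching step can be illustrated with a toy sketch. The three-number feature vectors and the threshold below are made-up stand-ins for the far richer measurements a real system records; the idea is simply that two faces "match" when their measurements are close enough:

```python
import math

# Hypothetical feature vectors: simplified stand-ins for the measurements
# a real system records (eye distance, chin shape, nose size, ...).
enrolled_face = [62.0, 41.5, 23.0]   # measurements stored at enrolment
new_photo     = [61.5, 41.8, 23.2]   # measurements taken from a new photo

def distance(a, b):
    """Euclidean distance between two feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

# If the two sets of measurements are close enough, treat them as the
# same face. The threshold here is illustrative, not a real value.
THRESHOLD = 1.0
is_match = distance(enrolled_face, new_photo) < THRESHOLD
print(is_match)  # True: the vectors differ by less than the threshold
```

Real systems use much higher-dimensional representations and learned similarity measures, but the close-enough comparison is the same in spirit.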

The next step is to 'train' the AI system. This is done by feeding photos and videos into it and checking whether the system correctly identifies the individual. Humans review the results, and where the computer gets it wrong, the settings are adjusted so that next time it makes fewer mistakes. As the system becomes more sophisticated, it can even check its own results, which is achieved through what is called 'deep learning', where the AI system automatically sets to work figuring out the answers to its own questions.
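The adjust-and-retest loop described above can be sketched as a toy threshold search. The labelled pairs below play the role of human-reviewed results, and 'training' is reduced to trying candidate settings and keeping the one that makes the fewest mistakes; all the numbers are invented for illustration:

```python
# Each entry: (distance between two face vectors, whether the pair
# really is the same person, as judged by a human reviewer).
labelled_pairs = [
    (0.4, True), (0.7, True), (1.1, True),
    (1.3, False), (2.0, False), (2.6, False),
]

def errors_at(threshold):
    """Count mistakes if any pair closer than `threshold` is called a match."""
    return sum((dist < threshold) != same
               for dist, same in labelled_pairs)

# 'Training' here is simply adjusting the setting (the match threshold)
# and keeping whichever value makes the fewest mistakes on the
# human-reviewed examples.
candidates = [0.5, 1.0, 1.2, 1.5, 2.0]
best = min(candidates, key=errors_at)
print(best, errors_at(best))  # 1.2 0: this threshold gets every pair right
```

Real training adjusts millions of internal parameters rather than one threshold, but the feedback principle is the same: compare the system's answers against reviewed examples and adjust to reduce the errors.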

The more images that are fed into the computer and the more it is 'trained', the better it becomes at matching faces or observing changes correctly. The problem with AI generally, and with facial recognition systems in particular, is that it is only as good as the training it receives. This can lead to unconscious bias, for example, or a system that performs better in some circumstances than in others, resulting in privacy risks.

Artificial Intelligence Problems

Unconscious bias

In 2019, Facebook was allowing its advertisers to intentionally target advertisements by gender, race and religion. For instance, women were prioritised in job advertisements for nursing or secretarial roles, whereas job advertisements for janitors and taxi drivers were shown mostly to men, in particular men from minority backgrounds. This happened because the data used to train the job-matching AI system reflected these biases. As a result, Facebook no longer allows employers to target its job advertisements by age, gender or race.

Unconscious bias or an inadequately trained AI engine can give rise to serious problems when used in policing, and these problems have been recognised and have often led to such use being banned. If a facial recognition system is trained on images of one particular group, it may be ineffective at recognising faces from other groups. When a system used to detect potential terrorists in crowds makes errors, the implications for a person incorrectly identified can be very severe.

Operating Issues

Although facial recognition systems are improving in reliability, there are still reports of systems being fooled. For example, person A holds up a photograph of person B for the camera to detect and recognise for entry into a building. The camera admits person A because it recognises them as person B.

What happens when a teenager holds a tablet in front of the face of a parent napping in front of the TV? If the facial recognition unlocks the tablet, it may also give the teenager access to the parent’s credit cards, online shopping accounts and enable the teenager to go on an unauthorised shopping spree.

Mitigation of Privacy Risks

As consumers, we need to understand when we are trading privacy for convenience. We should always be mindful and opt out, where the organisation permits it, if we are uncomfortable with the idea. For instance, in the US, travellers (U.S. citizens) can choose to opt out of facial recognition technology by approaching CBP Officers, airline, airport or cruise line representatives to seek an alternative means of verification.

In our day-to-day lives, it might be good to impose some house rules if there are young kids or teenagers at home who may try to unlock their parents' phones to buy things online or just as a prank.

How do organisations find a balance between ethics and benefits?

While big data management and analytical data simulations have benefited businesses, questions remain regarding privacy and data ethics in business. To learn more about artificial intelligence and the ethical considerations involved, sign up for our course to get a better understanding of the different artificial intelligence frameworks and regulations across the globe.


Adapted from an interview with: 
Kevin Shepherdson, CEO Straits Interactive - Fellow of Information Privacy, CIPM, CIPP/A, CIPP/E, CIPT, Exin (GDPR, Infosec), GRCP

The views and opinions expressed in this article are those of the author and do not necessarily reflect the official view or position of DPEX Network.

