Is Artificial Intelligence Ready to be the Backbone of Our Security Systems?

Artificial Intelligence has vastly improved in the last decade to the point where AI-powered software has become mainstream. Many organizations, including schools, are adopting AI-powered security cameras to keep a close watch on potential threats. For example, one school district in Atlanta uses an AI-powered video surveillance system that can provide the current whereabouts of any person captured on video with a single click. The system will cost the district $16.5 million to equip around 100 buildings.

These AI-powered surveillance systems are being used to identify people, suspicious behavior, and guns, and to gather data over time that can help identify suspects based on mannerisms and gait. Some of these systems are also used to identify persons previously banned from the area; if they return, the system immediately alerts officials.

Schools are hoping to use top-of-the-line AI-powered video surveillance systems to prevent mass shootings by identifying guns and suspended or expelled students, and by alerting police to the whereabouts of an active shooter.

AI-powered security systems are also being used in homes and businesses. AI-powered video surveillance seems like the perfect security solution, but accuracy is still a problem, and the technology isn't advanced enough for reliable behavioral analysis. AI isn't truly able to form independent conclusions (yet); at best, it is only capable of recognizing patterns.

AI isn’t completely reliable – yet

At first glance, AI might appear more intelligent and less fallible than humans, and in many ways that's true. AI can perform tedious tasks quickly and identify patterns humans miss due to perception bias. However, AI isn't perfect, and sometimes AI-powered software makes disastrous, even deadly, mistakes.

For instance, in 2018, a self-driving Uber car struck and killed a pedestrian crossing the street in Tempe, Arizona. The human 'safety driver' behind the wheel wasn't paying attention to the road and failed to intervene in time to avoid the collision. Video captured by the car showed the safety driver looking down toward her knee, and police records revealed she was watching The Voice just moments before the incident. This wasn't the only crash or fatality involving a self-driving vehicle.

If AI software repeatedly makes grave mistakes, how can we rely on AI to power our security systems and identify credible threats? What if the wrong people are identified as threats or real threats go unnoticed?

AI-powered facial recognition is inherently flawed

Using AI-powered video surveillance to identify a specific person relies heavily on facial recognition technology. However, there's an inherent problem with facial recognition: the darker a person's skin, the more often errors occur.

The error? Gender misidentification. The darker a person's skin, the more likely they are to be misidentified as the opposite gender. For example, a study conducted by a researcher at MIT found that light-skinned males were misidentified as women about 1% of the time, while light-skinned females were misidentified as men about 7% of the time. Dark-skinned males were misidentified as women around 12% of the time, and dark-skinned females were misidentified as men 35% of the time. Those aren't small errors.

Facial recognition software developers are aware of the bias against people with darker skin and are working to improve their algorithms. However, the technology isn't there yet, and until it is, it's probably a good idea to use facial recognition software with caution.

The other concern with facial recognition software is privacy. If an algorithm can track a person's every move and display their current location with a click, how can we be certain this technology won't be used to invade people's privacy?
