The real danger of AI: the threshold effect

Behind any artificial intelligence lies an algorithm. Sometimes those algorithms create unplanned (or planned?) distortions because of:
– the complexity of the algorithm
– forgotten exceptions
– the datasets chosen
– etc.
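The dataset point is worth making concrete. Here is a minimal, entirely hypothetical sketch (the feature values and labels are invented for illustration) of how a skewed training set forces a classifier into confident errors: a simple 1-nearest-neighbour rule trained on many examples of one class and a single example of another.

```python
# Hypothetical sketch: a 1-nearest-neighbour classifier on a skewed dataset.
# Class "cat" is well covered; class "dog" has only one example, so unseen
# dogs far from that one example get the wrong label.

def nearest_label(point, training):
    """Return the label of the training example closest to `point`."""
    return min(training, key=lambda ex: abs(ex[0] - point))[1]

# The feature is a single made-up number (think: some image statistic).
training = [
    (0.10, "cat"), (0.20, "cat"), (0.30, "cat"), (0.40, "cat"),
    (0.90, "dog"),  # the only "dog" the dataset's curators included
]

# A dog whose feature value lands between the clusters is mislabelled:
# the nearest training example is a cat, so the answer is "cat".
print(nearest_label(0.55, training))
```

The model has no notion of being "unfair" here; it simply has nothing better to match against, which is exactly how under-represented groups in a photo dataset end up mislabelled.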

Some of the real-life examples listed here are really embarrassing:

“Users discovered that Google’s photo app, which applies automatic labels to pictures in digital photo albums, was classifying images of black people as gorillas. Google apologized; it was unintentional.”

“But similar errors have emerged in Nikon’s camera software, which misread images of Asian people as blinking, and in Hewlett-Packard’s web camera software, which had difficulty recognizing people with dark skin tones.”

Full list of examples here:

Be ready, AI is inviting itself into your office

Are you ready to collaborate with AI?

AI software that understands and answers work-related questions has been made available in the UK. Starmind uses machine learning to understand queries, then sources answers from previous staff conversations on a subject or tracks down experts within the company who are able to help.

‘Starmind acts like an artificial hyper brain that seamlessly exists at the core of a company,’ Mr Kaufmann added.

The algorithm is then fuelled by the know-how stored inside the brains of everyone who engages with the system.

Full article:

The first fatality related to AI technologies


On June 29th 2016, Tesla reported the first fatality involving its Autopilot system.

“NHTSA is opening a preliminary evaluation into the performance of Autopilot during a recent fatal crash that occurred in a Model S. This is the first known fatality in just over 130 million miles where Autopilot was activated. Among all vehicles in the US, there is a fatality every 94 million miles. Worldwide, there is a fatality approximately every 60 million miles. It is important to emphasize that the NHTSA action is simply a preliminary evaluation to determine whether the system worked according to expectations.”
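To make Tesla's three quoted figures directly comparable, they can be normalized to fatalities per 100 million miles. This is just arithmetic on the numbers in the quote above, nothing more:

```python
# Normalize the per-mile fatality figures quoted by Tesla to a common unit:
# fatalities per 100 million miles driven.

MILES = 100_000_000

rates = {
    "Autopilot (1 fatality / 130M miles)": 1 / 130_000_000 * MILES,
    "US average (1 / 94M miles)":          1 / 94_000_000 * MILES,
    "Worldwide (1 / 60M miles)":           1 / 60_000_000 * MILES,
}

for label, rate in rates.items():
    print(f"{label}: {rate:.2f} fatalities per 100M miles")
```

On these figures Autopilot comes out at roughly 0.77 fatalities per 100 million miles versus 1.06 for the US average, though a single fatality is of course a very small sample to draw conclusions from.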

The end of the article highlighted how Tesla tries to protect drivers from themselves.

“It is important to note that Tesla disables Autopilot by default and requires explicit acknowledgement that the system is new technology and still in a public beta phase before it can be enabled. When drivers activate Autopilot, the acknowledgment box explains, among other things, that Autopilot “is an assist feature that requires you to keep your hands on the steering wheel at all times,” and that “you need to maintain control and responsibility for your vehicle” while using it. Additionally, every time that Autopilot is engaged, the car reminds the driver to “Always keep your hands on the wheel. Be prepared to take over at any time.” The system also makes frequent checks to ensure that the driver’s hands remain on the wheel and provides visual and audible alerts if hands-on is not detected. It then gradually slows down the car until hands-on is detected again.”

Full article: