Facebook’s virtual assistant M is dead, so are chatbots

An interesting viewpoint from Wired on the rise and fall (?) of chatbots.

It was easy for M’s leaders to win internal support and resources for the project in 2015, when chatbots felt novel and full of possibility. But as it became clear that M would always require a sizable workforce of expensive humans, the idea of expanding the service to a broader audience became less viable.

M’s core problem: Facebook put no bounds on what M could be asked to do. Alexa, by contrast, has proven adept at handling a narrower range of requests, many tied to facts or to Amazon’s core strength in shopping.

Another challenge: When M could complete tasks, users asked for progressively harder tasks. A fully automated M would have to do things far beyond the capabilities of existing machine learning technology. Today’s best algorithms are a long way from being able to really understand all the nuances of natural language.

“We launched this project to learn what people needed and expected of an assistant, and we learned a lot,” Facebook said in a statement. “We’re taking these useful insights to power other AI projects at Facebook. We continue to be very pleased with the performance of M suggestions in Messenger, powered by our learnings from this experiment.”

https://www.wired.com/story/facebooks-virtual-assistant-m-is-dead-so-are-chatbots/

Google Photos Still Has a Problem with Gorillas – MIT Technology Review

 

In 2015, Google drew criticism when its Photos image recognition system mislabeled a black woman as a gorilla—but two years on, the problem still isn’t properly fixed. Instead, Google has censored image tags relating to many primates.

What’s new: Wired tested Google Photos again with a bunch of animal photos. The software could identify creatures from pandas to poodles with ease. But images of gorillas, chimps, and chimpanzees? They were never labeled. Wired confirmed with Google that those tags are censored.

But: Some of Google’s other computer vision systems, such as Cloud Vision, were able to correctly tag photos of gorillas and provide answers to users. That suggests the tag removal is a platform-specific shame-faced PR move.

Bigger than censorship: Human bias exists in data sets everywhere, reflecting the facets of humanity we’d rather not have machines learn. But reducing and removing that bias will take a lot more work than simply blacklisting labels.
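
To make concrete what “simply blacklisting labels” looks like in practice, here is a minimal, hypothetical sketch of that kind of post-processing filter. The label set, scores, and function name are invented for illustration; this is not Google’s code.

```python
# Hypothetical post-processing blacklist -- NOT Google's implementation.
# The classifier still produces the label internally; it is simply never shown.

BLOCKED_LABELS = {"gorilla", "chimp", "chimpanzee", "monkey"}  # invented blocklist

def filter_tags(predictions, blocked=BLOCKED_LABELS, threshold=0.5):
    """Drop blocked labels from a label -> confidence mapping after classification."""
    return {
        label: score
        for label, score in predictions.items()
        if score >= threshold and label.lower() not in blocked
    }

# Example: the image ends up with no animal tag at all, which hides the bias
# in the underlying model rather than removing it.
raw_predictions = {"gorilla": 0.93, "panda": 0.02, "poodle": 0.01}
print(filter_tags(raw_predictions))  # {}
```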

 

Full article here: https://www.technologyreview.com/the-download/609959/google-photos-still-has-a-problem-with-gorillas/

What is decentralized AI?

 

A new buzzword is out: Decentralized AI

What is it for?

• You need an autonomous AI solution that runs in a decentralized environment and implements contractual obligations

• You need an AI optimized for on-device performance and not dependent on network connectivity (a minimal sketch follows this list)

• You want to sell your AI algorithms while maintaining proprietary rights
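
A minimal sketch of the on-device point, assuming a toy model whose weights ship inside the app: inference runs entirely locally, so no network connectivity is needed. The model, weights, and feature names below are invented for illustration.

```python
import math

# Toy weights bundled with the application -- no server round-trip required.
# These numbers and feature names are invented for illustration only.
LOCAL_WEIGHTS = {"bias": -1.2, "screen_time": 0.8, "battery_level": -0.3}

def predict_on_device(features):
    """Score one example locally with the bundled linear model (logistic output)."""
    z = LOCAL_WEIGHTS["bias"] + sum(
        LOCAL_WEIGHTS[name] * value for name, value in features.items()
    )
    return 1.0 / (1.0 + math.exp(-z))

# Works the same with airplane mode on: the device never calls out to a server.
print(predict_on_device({"screen_time": 2.5, "battery_level": 0.4}))
```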

Full article here:

https://www.forbes.com/sites/forbestechcouncil/2018/01/11/decentralized-artificial-intelligence-is-coming-heres-what-you-need-to-know/#2927962e146d

Facebook is shutting down M, its personal assistant service that combined humans and AI

Further details here: https://www.theverge.com/2018/1/8/16856654/facebook-m-shutdown-bots-ai

Happy new year from the Lili.ai team

Dear Supporter,

I would like to take this opportunity to thank you for your support. 2017 was a wonderful year for Lili.

I am happy to report that:

  • we are growing our customer base, with three tier-1 French corporations and numerous small teams
  • we have been selected as one of the 59 teams remaining in the IBM Watson AI XPRIZE global competition
  • we have received the CogX AI Innovation award in the Rising Star category

Wishing you all the best in your personal and professional lives for 2018!

 

Canadian government to hire company that uses artificial intelligence to identify online suicide-related behaviour

The company hired by the federal government claims that, by using artificial intelligence to analyze social media trends, it can predict surges in suicide rates, including the precise regions where they will occur.

 

‘We’re not violating anybody’s privacy’

Advanced Symbolics said its artificial intelligence looks for trends, not individual cases.

“It’d be a bit freaky if we built something that monitors what everyone is saying and then the government contacts you and said, ‘Hi, our computer AI has said we think you’re likely to kill yourself’,” said Kenton White, chief scientist with Advanced Symbolics.

Instead, the AI will flag communities or regions where multiple suicides could be likely. For example, Cape Breton Island was left reeling last year after three teenagers in the region died by suicide.
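
As a purely illustrative sketch of that aggregate-level idea, flagging regions rather than individuals, here is a toy z-score rule over invented weekly counts; it is not Advanced Symbolics’ actual model.

```python
# Illustrative only -- NOT Advanced Symbolics' method. It shows the general
# idea of flagging regions from aggregate counts rather than monitoring people.
from statistics import mean, stdev

# Weekly counts of suicide-related posts, aggregated per region (made up).
weekly_counts = {
    "Region A": [12, 14, 11, 13, 30],
    "Region B": [8, 9, 7, 10, 9],
}

def flag_regions(counts_by_region, z_threshold=2.0):
    """Flag regions whose latest weekly count is unusually high vs. their history."""
    flagged = []
    for region, counts in counts_by_region.items():
        history, latest = counts[:-1], counts[-1]
        spread = stdev(history) or 1.0          # avoid division by zero
        z = (latest - mean(history)) / spread
        if z >= z_threshold:
            flagged.append(region)              # surface the region, never a person
    return flagged

print(flag_regions(weekly_counts))  # ['Region A']
```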

Full article here: http://www.cbc.ca/news/canada/nova-scotia/feds-to-search-social-media-using-ai-to-find-patterns-of-suicide-related-behaviour-1.4467167