Privacy & Privilege

The fallout from the Cambridge Analytica (CA) scandal has become a watershed moment, not just for social media but for the corporate world at large, and for the key companies involved.

The Guardian newspaper headlined its story as “Facebook’s week of shame: the CA fallout” and noted that “US$60 billion was wiped off Facebook’s market capitalization in wake of Zuckerberg’s silence over the data breach.”

CA’s parent, SCL Elections Ltd, announced in early May 2018 that it was filing for insolvency in the UK and the US and closing all of its operations. The London-based company said CA had faced “numerous unfounded accusations” and been “vilified for activities that are not only legal but also widely accepted as a standard component of online advertising in both the political and commercial arenas.” However, corporate registration documents in London indicate that several SCL and CA executives have launched a new venture called Emerdata Ltd, The Globe & Mail reported on May 2, 2018.

Has this been a wakeup call for social media users? Will it impact big data analytics (BDA) and artificial intelligence (AI) solutions?

Yes and no. Yes, for literate and informed users, mainly in countries directly affected by the data CA captured and analysed. No, for the millions of users across Asia, Africa and Latin America, as well as the millions elsewhere who simply don’t care. After all, it’s not just social media that’s capturing your data.

Take loyalty cards, for example. Most consumers know that loyalty cards are used to track their behaviour and that the data is sold to marketers. Would they stop using these cards because of it? “Research shows that people in surveys say they want to maintain their privacy rights, but when asked how much they’re willing to give up in user experience – or to pay for it – the result is not too much,” an article in Knowledge@Wharton noted. In other words, there’s a difference between how much we care about privacy as an idea and how much we’re willing to give up to maintain it.

Meanwhile, BDA and AI technologies are not yet at an inflection point – though they soon will be – where it is possible either to collect all the data or to use all of that data for what is called actionable intelligence. However, after the Cambridge Analytica scandal broke, customer privacy became a hot potato.

In April 2018, two US Senators (Democrats) introduced a privacy “Bill of Rights” to protect US consumers’ personal data. The CONSENT (Customer Online Notification for Stopping Edge-provider Network Transgressions) Act would require the FTC (Federal Trade Commission) to establish privacy protections for customers of “edge” providers – entities, like Facebook and Google, whose apps and services connect directly with the user.

Like the EU’s GDPR (General Data Protection Regulation), the US CONSENT Act would require companies to obtain opt-in consent from users before using, sharing or selling their personal information; to notify users about all collection, use and sharing of that information; and to notify them in the event of a breach. Such practices are already being written into regulation in countries like Singapore, and are likely to be adopted more widely.
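To make the opt-in principle concrete, here is a minimal sketch in Python of how a provider might gate data processing on recorded consent. The class, purposes and field names are hypothetical illustrations, not drawn from the text of the CONSENT Act or the GDPR; the point is simply that under opt-in rules the default answer is “no”.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

# Hypothetical consent record; names and fields are illustrative,
# not taken from any statute's text.
@dataclass
class ConsentRecord:
    user_id: str
    purpose: str          # e.g. "analytics", "ad_targeting"
    granted: bool
    timestamp: datetime

class ConsentRegistry:
    """Minimal opt-in registry: processing is denied unless consent
    for that exact purpose was explicitly granted."""
    def __init__(self):
        self._records = {}  # (user_id, purpose) -> ConsentRecord

    def grant(self, user_id, purpose):
        self._records[(user_id, purpose)] = ConsentRecord(
            user_id, purpose, True, datetime.now(timezone.utc))

    def revoke(self, user_id, purpose):
        self._records[(user_id, purpose)] = ConsentRecord(
            user_id, purpose, False, datetime.now(timezone.utc))

    def may_process(self, user_id, purpose):
        record = self._records.get((user_id, purpose))
        return record is not None and record.granted  # opt-in: default deny

registry = ConsentRegistry()
registry.grant("user-42", "analytics")
assert registry.may_process("user-42", "analytics")         # explicitly granted
assert not registry.may_process("user-42", "ad_targeting")  # never asked: denied
```

The default-deny lookup is what distinguishes opt-in from opt-out: the absence of a record means processing is not allowed.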

Data Degrees

On the flip side, not all data captured, directly or indirectly, is harmful. For example, if there is a major fire in a locality where many people could be trapped or burnt, LBS (location-based services) that identify and alert people in the area can save thousands of lives. The same goes for infectious disease prediction and management, and for tracking criminal activity.
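As a rough sketch of how such an alert might work, the Python snippet below computes great-circle distances from an incident and selects users inside an alert radius. The coordinates and the five-kilometre radius are made-up illustrations, assuming last-known locations are already available from an LBS feed.

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points in kilometres."""
    r = 6371.0  # mean Earth radius, km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def users_to_alert(users, incident_lat, incident_lon, radius_km):
    """Return IDs of users whose last known location falls inside
    the alert radius around the incident."""
    return [uid for uid, (lat, lon) in users.items()
            if haversine_km(lat, lon, incident_lat, incident_lon) <= radius_km]

# Hypothetical last-known locations: user_id -> (latitude, longitude).
users = {"u1": (1.3521, 103.8198),
         "u2": (1.3000, 103.8000),
         "u3": (1.4500, 103.9000)}
print(users_to_alert(users, 1.3521, 103.8198, 5.0))  # -> ['u1']
```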

Can AI and ML (machine learning) diagnose, monitor and/or prevent the abuse of privacy and security? They can. BDA can surface the patterns, and an AI engine can then test that output for accuracy and weed out “false positives”. As stated earlier, not all analytics is detrimental. So the key is not to enhance privacy for its own sake, but to ensure that the “actionable intelligence” is not being abused – nor any legal regulations breached – for commercial gain.
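As an illustration of that loop, the sketch below runs an off-the-shelf anomaly detector (scikit-learn’s IsolationForest) over hypothetical per-account data-access features of the kind a BDA pipeline might produce. The features and contamination rate are assumptions for illustration; flagged accounts are candidates for human review, not verdicts, which is exactly why false positives must be checked.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Hypothetical per-account features from a BDA pipeline:
# [records_accessed_per_day, distinct_datasets_touched, exports_per_day]
rng = np.random.default_rng(0)
normal = rng.normal(loc=[50, 3, 1], scale=[10, 1, 0.5], size=(500, 3))
abusive = np.array([[900, 40, 60],    # bulk-scraping pattern
                    [400, 25, 30]])   # large-export pattern
X = np.vstack([normal, abusive])

# IsolationForest flags rare access patterns for human review. The flagged
# set usually includes a few normal accounts too - the "false positives"
# that reviewers must weed out.
model = IsolationForest(contamination=0.01, random_state=0).fit(X)
flags = model.predict(X)           # -1 = anomalous, 1 = normal
print(np.where(flags == -1)[0])    # indices of flagged accounts
```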

Another key question: can AI save lives? AI engineers at the Houston Methodist Research Institute developed software in 2016 that can accurately diagnose a patient’s breast cancer 30 times faster than doctors can. When fed the mammogram results and medical histories of about 500 patients, the software diagnosed breast cancer with 99% accuracy, according to an IEEE report. The software also produces fewer false positives than doctors do.
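A worked example helps separate “accuracy” from “false positives”, since a classifier can score well on one and poorly on the other. The counts below are hypothetical, not the Houston Methodist study’s data; they simply show how the two metrics are computed from a confusion matrix.

```python
# Hypothetical confusion-matrix counts for a diagnostic classifier.
tp, fn = 99, 1      # cancers correctly caught / missed
tn, fp = 380, 20    # healthy correctly cleared / falsely flagged

accuracy = (tp + tn) / (tp + tn + fp + fn)
sensitivity = tp / (tp + fn)            # share of real cancers detected
false_positive_rate = fp / (fp + tn)    # share of healthy patients mis-flagged

print(f"accuracy={accuracy:.1%}, sensitivity={sensitivity:.1%}, "
      f"FPR={false_positive_rate:.1%}")
# accuracy=95.8%, sensitivity=99.0%, FPR=5.0%
```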