Addictive social media apps are not just about better UX! Artificial intelligence is at the core of Facebook’s growth strategy, and of its pitfalls

 “If, then, I were asked for the most important advice I could give, that which I considered to be the most useful to the men of our century, I should simply say: in the name of God, stop a moment, cease your work, look around you.”  Leo Tolstoy, Essays, Letters and Miscellanies

The other day I was reading an article published in the MIT Technology Review about Cambridge Analytica, a consultancy that worked on Donald Trump’s 2016 presidential election campaign. The firm had siphoned the personal data of millions of US citizens from their Facebook accounts with the objective of influencing the presidential vote.

In my opinion, the bigger issue isn't data being stolen or misused; users post that data of their own volition, and social media portals can devise security mechanisms to preempt such occurrences in the future. The bigger, truer problem lies in the core strategy of social media portals in general and Facebook in particular.

Among all the metrics tracked at Facebook, such as monthly active users (how many people have logged in to Facebook at least once in the last 30 days), one metric stands out: L6/7. It measures engagement over the trailing week: the fraction of users who, on any given day, have used the platform to view, like, share or comment on six of the last seven days.
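To make the definition concrete, here is a minimal sketch of how such a metric could be computed. The function name, the toy user data and the activity representation are my own illustration, not Facebook's actual implementation:

```python
from datetime import date, timedelta

def l6_of_7(activity_by_user, today):
    """Fraction of users active on at least 6 of the last 7 days.

    activity_by_user: dict mapping user id -> set of dates on which
    the user viewed, liked, shared or commented on the platform.
    """
    window = {today - timedelta(days=i) for i in range(7)}
    if not activity_by_user:
        return 0.0
    qualifying = sum(
        1 for days in activity_by_user.values()
        if len(days & window) >= 6
    )
    return qualifying / len(activity_by_user)

today = date(2021, 3, 1)
users = {
    "heavy":  {today - timedelta(days=i) for i in range(7)},  # active 7/7 days
    "medium": {today - timedelta(days=i) for i in range(6)},  # active 6/7 days
    "light":  {today, today - timedelta(days=3)},             # active 2/7 days
}
print(l6_of_7(users, today))  # 2 of the 3 users qualify
```

The point is that L6/7 rewards near-daily habit, not occasional visits: a user who shows up twice a week contributes nothing to it.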

The man responsible for catapulting Facebook into an AI powerhouse is Quinonero. In his six years at Facebook he has created ultra-personalised algorithms that target users based on their likes and choices to maximise engagement with the platform. In a nutshell, it works like this: keep showing a person the things that excited him or her, then show more and more of the same to push engagement higher. Each user sees whatever his behavioural data suggests will thrill him enough to stay engaged.
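That self-reinforcing loop can be sketched in a few lines of toy code. The categories, the weights and the update rule below are entirely invented for illustration; Facebook's real ranking system is vastly more complex:

```python
CATEGORIES = ["sports", "politics", "melancholy", "memes"]

def pick_post(weights):
    """Rank: show the category the model currently scores highest."""
    return max(weights, key=weights.get)

def feedback_loop(engages_with, rounds=20):
    """Toy loop: every engagement nudges that category's weight up,
    so the feed drifts toward whatever the user already reacts to."""
    weights = {c: 1.0 for c in CATEGORIES}
    for _ in range(rounds):
        shown = pick_post(weights)
        if shown in engages_with:   # the user likes/shares/comments
            weights[shown] *= 1.2   # model learns: show more of this
        else:
            weights[shown] *= 0.8   # no reaction: show it less
    return weights

# A user who only reacts to melancholic content...
w = feedback_loop({"melancholy"})
# ...ends up with a feed dominated by it.
print(max(w, key=w.get))
```

Nothing in the loop asks whether the content is good for the user; it only asks what the user reacts to.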

Zuckerberg's obsession with getting the whole world to use Facebook more and more, and with winning Facebook the lion's share of advertising dollars, found a new solution in Quinonero's AI algorithms. Engineers at Facebook had previously played with, among other things, changing the design, using notifications and tweaking the UI for better UX so that a user, once brought onto the platform, would stay engaged longer. The new AI-based algorithms create personalised feedback loops, tailoring each user's news feed to keep nudging engagement numbers up. And that is where the root of the issues we have seen in recent times lies. For example, a depressed person will post and share melancholic content and view sad, depressing news on the platform. Facebook's AI model learns from this and shows him more of the same, which makes him engage more with the platform and ultimately grows the L6/7 metric. The algorithms designed to grow Facebook's business were never created to filter out what is false or inflammatory; they are simply designed to make people share and engage longer and more frequently. Some of the events that happened in different parts of the world, like the genocidal pogrom in Myanmar and the rioting at the US Capitol, can be attributed to this thirst for pushing users the content that appeals to them.

When one is relentlessly focused on growing at any cost, tracking business and platform engagement metrics, one is not worried about what might happen on the streets. If the AI figures out that certain types of content lead to little or no engagement while conspiracy theories, divisive views and strong opinions lead to higher, more intense engagement, then so be it. Tweaking the algorithm to slow down a certain kind of view would ultimately hurt the engagement metrics, and a firm whose business depends on users coming back to its platform will avoid letting that happen.

Writing algorithms is a computer science student's 101 course in college. Traditional algorithms, however, are hard-coded: humans write a set of instructions which the machine follows, with the decisions and conditions coded by a person for the use cases at hand. Machine-learning algorithms instead “train” on input data to learn the correlations within it, and the resulting trained model automates all future decisions the way a human would. For example, an algorithm trained on past ad-click data might learn that in December and January men click on fitness content more often than women; the trained model will then serve more of those ads to men during those months.

Facebook has massive amounts of data on more than 25% of the world's population, a ballpark figure, and possibly more in countries with higher internet and mobile penetration. This advantage lets its engineers develop as many models as they want, tweaking variables to see which one drives engagement better than the others, and roll the winner out on the platform. The more hyper-personalised the targeting, the better the chance of a mouse click or a mobile press. That keeps more users engaged, and hyper-personal ads give advertisers more traffic and leads for sourcing their business. The only things engineers need to do are create models, train models and test models; the data is already there in tonnes.
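The contrast between a hand-coded rule and a trained model can be shown with a toy sketch of the fitness-ad example above. The data and the December/January pattern are fabricated to mirror the example; "training" here is nothing more than estimating a click rate per segment:

```python
from collections import defaultdict

# Toy training data: (month, gender, clicked_fitness_ad)
history = [
    (12, "M", True), (12, "M", True), (12, "M", False),
    (12, "F", False), (12, "F", True),
    (1, "M", True), (1, "M", True),
    (1, "F", False),
    (6, "M", False), (6, "F", False),
]

def train(rows):
    """Learn a click rate per (month, gender) segment from past data."""
    clicks, shows = defaultdict(int), defaultdict(int)
    for month, gender, clicked in rows:
        shows[(month, gender)] += 1
        clicks[(month, gender)] += clicked
    return {seg: clicks[seg] / shows[seg] for seg in shows}

def serve_fitness_ad(model, month, gender, threshold=0.5):
    """Serve the ad only to segments the model scores above threshold."""
    return model.get((month, gender), 0.0) > threshold

model = train(history)
print(serve_fitness_ad(model, 12, "M"))  # True: December men clicked 2 of 3
print(serve_fitness_ad(model, 6, "F"))   # False: no June clicks in the data
```

No person ever wrote the rule "show fitness ads to men in winter"; the rule emerged from the data, and retraining on fresh data would change it automatically.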

The catch is that just as machine-learning models can be trained to present adverts, they can also be trained to present content for consumption, surface posts and suggest groups to users and keep them hooked. The era of UX- and notification-based addiction and persuasion has been taken over by a more powerful and potent tool: learned models that understand each user's psychology and help deepen whatever prejudices he or she might have.

The model-development platform Quinonero built for anyone at Facebook to use, called FBLearner Flow, allows engineers with little to no AI or machine-learning experience to train and deploy machine-learning models in days. If a model hurts likes, comments, shares and engagement it is discarded; otherwise it is adopted and retrained for even better results.
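That adopt-or-discard rule amounts to a simple metric gate. The function name and threshold below are my own illustration of the logic, not FBLearner Flow's actual API:

```python
def should_adopt(baseline_engagement, candidate_engagement, min_lift=0.0):
    """Adopt a candidate model only if it lifts the engagement metric
    (likes, comments, shares per user) over the current production model."""
    lift = (candidate_engagement - baseline_engagement) / baseline_engagement
    return lift > min_lift

# Candidate A nudges engagement up 3%: adopt it and retrain.
print(should_adopt(baseline_engagement=100.0, candidate_engagement=103.0))
# Candidate B drops engagement 2%: discard it, no matter what it filtered out.
print(should_adopt(baseline_engagement=100.0, candidate_engagement=98.0))
```

Note what the gate does not measure: whether the content driving the lift is true, healthy or safe. Only engagement decides.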

Empirically, people with polarising views help information spread by reading and sharing content that appeals to their worldview; by disallowing polarising views, Facebook would run the risk of lowering engagement. While social media did act as a catalyst for the Arab Spring movement, it was only a matter of time before the other edge of the AI sword revealed itself and social media began facilitating the spread of extremism and fake news.

Machine-learning models trained to identify fake news and hate speech will not be able to stop them. Misinformation evolves, and the models cannot evolve at the same pace: a model trained to catch fake news about one situation will fail to identify fake news in other contexts. Humans, meanwhile, keep evolving the way information is created and spread, using euphemisms for extremism and image-text combinations that are harder for trained machine-learning models to identify and filter out.

Engineers' salaries and pay are tied to platform engagement and growth metrics, while the metrics for filtering fake or harmful content remain a subjective matter. AI is just a technology, one that society and businesses need to use carefully. Regardless of whether Facebook used AI or not, it is up to users and society at large to stop creating and sharing vitriol and lies, as that content would still spread across any platform sooner or later.

Swapnil Jadhav

Banking Professional

IIT Bombay

IIM Ahmedabad
