How we all love “page turner” books and hate “addictive” algorithms, in public.

Bad, bad algorithms make us waste hours. Great books make us turn pages. We need more time away from the screen. We need to stop those algorithms. Right?

Ismail O Postalcioglu
10 min read · Sep 19, 2023


Among the provisions of the Digital Services Act, the EU makes an important demand of platforms: they must offer users an option to opt out of content suggestion algorithms.

From Instagram to TikTok and Snapchat, all platforms are complying with this rule.

But will it work?

And, could there be a better approach?

“Algorithms made me watch it!”

“In theory, a personalized algorithm promises better videos; in practice, it makes it easier for such platforms to dictate what kind of content we encounter.”

Nita Farahany

Many experts, like Farahany, are sceptical about content suggestion algorithms. They see them as a tool for “dictating” content.

Here’s the irony: a platform that “dictates” content dies a slow death.

Just ask Tumblr.

It’s been five years since Tumblr’s 2018 decision to dictate what its users could post, and it has yet to repair the damage.

Platforms do not create algorithms to show us what they want: on the contrary, they build them to understand how they can serve what we want. Any platform that does otherwise loses users to another.

In the end, what made TikTok a success?

Vertical videos? No, we’ve had them since Snapchat, Instagram Stories and even Vine. Comedy? Nope. Filters? No. Music? TikTok, under the name “Musical.ly”, had music long before it turned into a social media phenomenon. But something was missing:

TikTok became a success when it used machine learning to serve content to users based on what they liked, instead of who they followed.

This approach disrupted Meta and YouTube, which based their main algorithms on our following lists until 2022.

According to TikTok CEO Shou Chew, this has been the main idea since the beginning, but, of course, it took AI and machine learning to make it really work.
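To make the shift concrete, here is a minimal sketch of the two feed-building strategies. It is written in Python with entirely illustrative field names, weights, and data; it is not TikTok’s actual system.

```python
# A minimal sketch, not TikTok's actual system: all field names,
# weights, and data below are illustrative assumptions.

def follow_based_feed(user, posts):
    """The older approach: show what followed accounts posted, newest first."""
    followed = [p for p in posts if p["author"] in user["following"]]
    return sorted(followed, key=lambda p: p["timestamp"], reverse=True)

def interest_based_feed(user, posts):
    """The TikTok-style approach (simplified): rank every post by how well
    its topics match interests inferred from past behavior, regardless of
    who posted it."""
    def score(post):
        return sum(user["interest_weights"].get(topic, 0.0)
                   for topic in post["topics"])
    return sorted(posts, key=score, reverse=True)

# A user who follows only "alice" still sees the stranger's post first,
# because their watch history pushed "cats" and "comedy" up in the weights.
user = {"following": {"alice"},
        "interest_weights": {"cats": 0.9, "comedy": 0.4}}
posts = [
    {"author": "alice", "topics": ["news"], "timestamp": 2},
    {"author": "stranger", "topics": ["cats", "comedy"], "timestamp": 1},
]
print(interest_based_feed(user, posts)[0]["author"])  # -> "stranger"
```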

What makes LinkedIn grow so fast in 2023? Its relentless effort to integrate Microsoft’s know-how into its algorithms to introduce us to the right people.

What motivated Elon Musk to create a “For You” feed on Twitter so early? His insight into our urge to reply when we read the ideas of strangers.

Why is the “Explore” tab so important for Instagram?

Because it’s their answer to our main social need.

We need to explore strangers

We are way past the times when we followed the people we already knew and liked their wedding photos.

First generation social media gave us a way to connect with our friends and family: a method to re-connect with our past and forget about it again.

Second generation social media gave us a way to discover new people through their content: it gave us the “influencers”.

Third generation social media is a shrine to the genuine human experience.

We came to understand that some influencers were merely glorified users. Some were simply early adopters with inflated follower counts. Some were exceptional content creators; others, not so much.

We realized that an ordinary stranger with 50 followers could be funnier than some Netflix specials.

Now we want to find more strangers with great ideas, orange cats or cringe moments — whatever works for us.

We have a human thirst for others’ experiences. Language emerged from this thirst and propelled us into the 5,000-year-long era of reading and writing. Now, AI algorithms are the new fountainhead.

That’s why TikTok is rising on its machine learning capability to introduce the right strangers to us, and all the other major platforms are trying to imitate it.

That’s why even YouTube executives are confessing that they are having a hard time keeping Shorts from cannibalizing their traditional content.

We want to discover new people, and algorithms are the perfect way to harvest our interests and lead us to their content.

What if we choose our own algorithms?

“In my ideal world, I would like to be able to choose my feed from a list of providers. I would love to have a feed put together by librarians.”

Julia Angwin, The New York Times

Bluesky, the Twitter/X rival supported by Twitter’s ex-CEO Jack Dorsey, is giving users an interesting choice: developers publish alternative algorithms and you can choose the flavor you like.

Great idea. But how different is this from trying different apps?

According to Angwin, this system gives users the choice to prefer an algorithm coming from the “librarians”. By librarians, she means more socially acceptable authorities.

Some users might prefer that. But does it create a real difference in general?

A better question: would those “some users” really prefer that?

Imagine two competing algorithms: (1) an Instagram-like Explore page that serves your true cravings, and (2) a timeline full of socially acceptable content based on your public identity.

If you are telling yourself that people would choose the latter, you are lying to yourself.

There might be many alternate realities, but in ours, not only rom-coms and reality shows but even bookshops beat libraries when it comes to numbers.

If you think about it, it’s not a matter of abundance. Libraries have always been full of engaging content; they housed Bridget Jones and A Game of Thrones long before they were popular. However, libraries were never interested in what you are interested in.

Even though literature is full of interesting story material, I really doubt that libraries want to become as crowded as a popular movie theatre.

Even if they did, library “algorithms” are broken: they see no virtue in adapting to readers; they expect you to adapt yourself to them. If you don’t, it’s your fault.

If only they gave as much weight to an individual’s interests as Wikipedia or Amazon do.

…or as much as number 3, third generation social media, does.

As Kahneman explains in his popular and hard-to-finish book “Thinking, Fast and Slow”, your unconscious mind processes most of the information you receive, and most of your reactions are not controlled by your conscious identity.

People are generally honest about what they “want to want”.

What they “want”, on the other hand, is another story.

You want some content, but you cannot explain why, and you don’t have to. No matter what our rationalist side tells us, we don’t need to rationalize our “wants”.

The content you enjoy does not always need to suit your social identity, and you don’t need to “cure” the difference.

You still want that content, and that content is a part of who you are.

And even though you don’t openly follow the accounts you really want to, algorithms measure your actions and bring that content to you.

That’s why the algorithm works better than your following list.
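As a hedged illustration of what “measuring your actions” can mean in practice, here is a small sketch of implicit feedback. The signal names and weights are invented for the example; real systems use far richer signals.

```python
# Invented signal names and weights, purely for illustration.
IMPLICIT_SIGNALS = {"watch_ratio": 1.0, "replayed": 2.0, "shared": 3.0}

def update_interests(interest_weights, post_topics, signals):
    """Nudge the inferred interest in each topic of a post, proportional
    to how strongly the user engaged with it. No follow required."""
    strength = sum(IMPLICIT_SIGNALS[name] * value
                   for name, value in signals.items())
    for topic in post_topics:
        interest_weights[topic] = interest_weights.get(topic, 0.0) + strength
    return interest_weights

# You never follow a single "cringe" account, but watching one video to
# the end and replaying it still shifts the model toward that content.
weights = {}
update_interests(weights, ["cringe"], {"watch_ratio": 0.95, "replayed": 1})
print(weights)  # {'cringe': 2.95}
```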

Visual by Kay Kim

Everybody lies

In the past, people complained about the stress of seeing the number of likes under their Instagram posts.

So, in 2019, Instagram started experimenting with hiding like counts. It turned out that people hated the change: Instagram got so many complaints from users who wanted to see the numbers that it ended the experiment after two years. There was no point in listening to what people said; their actions showed the opposite.

Until Ellen DeGeneres shared her famous selfie, most people “hated” selfies.


Or did they?

In public, they despised them. They made fun of everyone who shared a selfie.

But the moment selfies became publicly acceptable, the same people adopted them.

Suddenly, nobody had problems with selfies at all. People used to mock selfies because it was publicly acceptable to mock them, until it was not.

So, do people really have a problem with AI algorithms?

Let me rephrase:

Are people honest when they complain about the content they see due to the “addictive” content algorithms?

No.

People just have a notion that they “should” feel bad about watching ordinary content by ordinary strangers. They “should” instead watch an artistic noir documentary about ordinary strangers.

The notion that a “page turner” book is a great thing but a “page turner” algorithm is harmful comes from the value we assign to these two categories.

People praise it when they “couldn’t put a book down”, but they feel corrupted when they “couldn’t put the phone down”.

We blindly accept the idea that an “Average Joe’s” social media content cannot be as valuable as an author’s fiction.

This comes from the secondhand admiration we have for “authors”: an artifact from the times when being an author was really something exceptional.

It’s not anymore.

If there is a logic behind the claim that reading any book is more valuable than enjoying any online content today, that logic is yet to be proven.

Shouldn’t we regulate algorithms?

Nobody is saying that we should accept whatever content the algorithms choose for us. Society matters.

From the moment we began living in communities, we realized the need for a balance between individual urges and societal maintenance. While rules and laws have evolved over millennia, they have all aimed to achieve this balance.

Content algorithms try their best to understand what individuals would like to see and serve the most relevant content to them. They directly answer our demands from the content pool we provide for them.

And the content pool they have is not controlled well.

This is the main problem: not that we enjoy their curations too much, but that some content they curate might be problematic.

So, some issues the EU is trying to address are completely legitimate: a country or an entity should be able to expect platforms to block content that is potentially harmful, especially for minors.

This is what we should focus on. Fighting algorithms is not a solution for this. On the contrary: fighting algorithms is actually fighting the potential solution.

The age of domestication

(I know this is not a perfect choice of words. I know AI has the capacity to be more intelligent than us. But I don’t take domestication as a matter of superiority here. I take it as a mutualistic process: domestication helps two “species” find a way to thrive together.)

Currently, AI is a wild entity that can either harm or benefit us. We have two options: hunt it down or domesticate it.

Blaming AI just because we don’t know how it chooses content means labeling it harmful by nature, because an AI algorithm will always choose content in ways we cannot fully trace. This approach will inevitably cause denial, and any such approach leads to “hunting it down”.

This is futile. People will always opt for algorithms over chronological timelines. Users will always like the AI curations better than ordinary lists of what people share. Algorithms will develop and we will need them for everything in the near future.

Denial is and has always been a waste of time and effort.

On the other hand, we can take a cue from our ancestors and choose cooperation:

Instead of focusing on what is wrong with the algorithms at the moment, we can focus on working with them.

If I were to draft a plan for the EU, I would reverse the approach: we can get help from the algorithms if we teach them what we need.

The EU designated six “gatekeeper” companies, and four of them (Meta, Microsoft, Alphabet, and ByteDance) are experts in working with AI. These four also own major social media platforms.

Instagram, Facebook, YouTube, and TikTok have huge libraries of millions of pieces of content that human moderators took down in the past. Let’s make a joint effort to train the AI on them, and add human confirmation for further processing.

Algorithms already take part in content moderation, but it isn’t working well. A body like the EU can enforce rules and foster cooperation between the tech giants so they join forces on a better algorithm. In the end, their content moderation rules have a lot in common, and the leverage the EU holds could lead to a lasting result.
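A sketch of what that human-in-the-loop pipeline could look like is below. The thresholds and the toy classifier are stand-ins of my own, not any platform’s real system.

```python
# Assumed thresholds, chosen only for illustration.
AUTO_REMOVE_THRESHOLD = 0.98   # near-certain violations
HUMAN_REVIEW_THRESHOLD = 0.60  # uncertain cases go to people

def moderate(upload, classifier):
    """Route an upload based on the model's estimated probability that it
    violates policy: remove it, queue it for a human, or publish it."""
    p_violation = classifier(upload)
    if p_violation >= AUTO_REMOVE_THRESHOLD:
        return "removed"           # flagged the moment it is uploaded
    if p_violation >= HUMAN_REVIEW_THRESHOLD:
        return "queued_for_human"  # the "human confirmation" step
    return "published"

# A toy classifier stands in for a model trained on the platforms'
# shared libraries of previously removed content.
print(moderate("example upload", classifier=lambda u: 0.75))
# -> "queued_for_human"
```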

So, let’s stop our instinctive fear of the new and think of the main purpose: by focusing our energy, we could detect child abuse content so fast that any new upload is flagged the moment it appears.

I know such a plan will take time, and the existing moderation algorithms are faulty. But at least such a method focuses on the real problem and it’s a progressive, collective action. Treating AI algorithms as an “addiction” is not an action: it’s just a typical reaction to change.

What about privacy?

Another prominent concern about algorithms seems to be “privacy”. Camille Sojit Pejcha, in her article, claims that

“the secret to [TikTok’s] proprietary algorithm is a simple one: surveillance. By closely monitoring how long someone lingers on each video, the algorithm picks up cues as to what kind of content to serve up next.”

This little paragraph is based on a definition.

Does having an algorithm “watch” us count as “surveillance”?

Not necessarily.

If we defined surveillance as “any machinery holding data on our digital actions”, then all digital actions would be “under surveillance”. Such a definition would be useless: it would push us towards a deadlock where we confuse a solvable real issue (i.e. surveillance) with a commonplace situation (i.e. user data).

An algorithm should count as surveillance only when our data is recorded and used by the company and other agents. The paragraph above talks about surveillance without presenting such a case. It is an excellent example of losing focus due to sheer hostility against the use of algorithms.

In short, many arguments against algorithms confuse potential misuse with actual use. This approach has two main problems:

  1. It contributes to the issue rather than addressing it. Not all algorithms are benign; future issues will stem from new algorithms, and demonizing all of them outright makes efficient solutions almost impossible.
  2. It has no future. This approach treats algorithms as inevitably malevolent. Since they are here to stay in future technology, it contributes nothing to solving the real issues.

____

[This article was first published on my Substack page, “out of the water”.]
