RSS, the living dead

The author of the article, a journalist, discusses the shortcomings of RSS and offers recommendations on how the technology could be improved.



RSS is dead. Despite the demise of FeedBurner, Google Reader, last month's Digg Reader, and the other RSS aggregators that were popular in recent years, this modest protocol, pronounced dead over and over, keeps shambling along in the face of endless evidence that it is dead, dead, and dead again.

Now, in the light of the scandal over the leak of Facebook users' data to a third-party company, Cambridge Analytica, a number of experts are urging that RSS be resurrected. Wired's Brian Barrett recently put it this way: “...anyone who is tired of the power of proprietary, closed algorithms controlling the content of their online feeds can at least take comfort in the existence of a solution that has always been close at hand, but has often been ignored by everyone. Tired of Twitter? Tired of Facebook? It's time to return to RSS.”

One point should be clarified right away: RSS is not coming back to life; it is now officially entering its “living dead” phase.

And don't get me wrong: I love RSS. At its core, it is the perfect embodiment of several remarkable but hard-to-realize principles of the Internet, namely transparency and openness. The protocol itself is very simple and easy to read. It is very close to the old, original format of the web, with its static, full-text HTML articles. But perhaps its most important feature is decentralization: no single entity endowed with some kind of power is trying to push content in your face that you did not ask for.
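To illustrate just how little machinery the format demands, here is a minimal sketch: an invented RSS 2.0 feed read with nothing beyond Python's standard library.

```python
import xml.etree.ElementTree as ET

# A minimal, invented RSS 2.0 feed: a channel with plain, full-text items.
FEED = """<?xml version="1.0"?>
<rss version="2.0">
  <channel>
    <title>Example News</title>
    <link>https://example.com/</link>
    <description>A hypothetical news site</description>
    <item>
      <title>First story</title>
      <link>https://example.com/first</link>
      <pubDate>Mon, 02 Apr 2018 09:00:00 GMT</pubDate>
      <description>Full text of the first story.</description>
    </item>
  </channel>
</rss>"""

# Reading the feed takes only a few lines of standard-library code.
channel = ET.fromstring(FEED).find("channel")
for item in channel.findall("item"):
    print(item.findtext("title"), "-", item.findtext("link"))
```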

RSS embodies great ideas, but the reality is that the protocol in its current form lacks the functionality required by almost every participant in the modern ecosystem of content creation and consumption. And so there are serious reasons to believe that its return in the foreseeable future is unlikely.

Before delving into the details, let me immediately draw the distinction between the RSS protocol and RSS aggregators, that is, the programs that process content according to the protocol. Some of the difficulties this technology faces can be solved at the aggregator level and therefore come down to questions of proper software design. Many others, however, ultimately have to be solved at the protocol level.

Let's start with the users. As a journalist, I love being able to organize hundreds of RSS feeds in chronological order. This lets me track absolutely every story relating to my area of interest. But this use case is quite rare: not many RSS readers are paid a salary to follow the news in detail. Instead, most users need personalization and prioritization. Above all, they want to see only the most important content in their feeds or streams, since they generally do not have time to digest large volumes of information.
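As a rough sketch of that chronological reading style, assuming the third-party feedparser library and a couple of invented feed URLs, merging many feeds into one timeline takes only a few lines:

```python
import time
import feedparser  # third-party: pip install feedparser

# Hypothetical feed URLs; in practice this list would hold hundreds of entries.
FEEDS = [
    "https://example.com/politics/rss",
    "https://example.org/world/rss",
]

entries = []
for url in FEEDS:
    parsed = feedparser.parse(url)
    for entry in parsed.entries:
        # Not every feed dates its items; fall back to "now" so nothing is lost.
        stamp = entry.get("published_parsed") or entry.get("updated_parsed") or time.gmtime()
        entries.append((stamp, entry.get("title", ""), entry.get("link", "")))

# One strictly chronological stream, newest first.
for stamp, title, link in sorted(entries, reverse=True):
    print(time.strftime("%Y-%m-%d %H:%M", stamp), title, link)
```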

To get a sense of what is at stake, try subscribing to the main news RSS feed of a major newspaper such as the Washington Post, which publishes about 1,200 items per day. No, really, do it, and try to find, among the whole heap of articles on fashion, style, and food, the latest reports on troop movements in the Middle East.

Some sites try to work around this shortcoming by offering a selection of RSS feeds based on keywords. But here, too, things are complicated by the fact that each story is often assigned several keywords, and the quality of that tagging varies enormously from site to site. As a result, some items land in the feed several times, while others that interest me may pass the reader by altogether.
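An aggregator can at least collapse such duplicates by keying on each item's GUID, with the link as a fallback. A minimal sketch, with invented keyword-feed URLs and again assuming the feedparser library:

```python
import feedparser  # third-party: pip install feedparser

# The same hypothetical site sliced by keyword; one story often lives in several slices.
KEYWORD_FEEDS = [
    "https://example.com/rss/middle-east",
    "https://example.com/rss/military",
    "https://example.com/rss/politics",
]

seen = set()
unique_entries = []
for url in KEYWORD_FEEDS:
    for entry in feedparser.parse(url).entries:
        # Prefer the publisher's GUID when present, otherwise fall back to the link.
        key = entry.get("id") or entry.get("link")
        if key and key not in seen:
            seen.add(key)
            unique_entries.append(entry)

print(f"{len(unique_entries)} unique stories after deduplication")
```

Deduplication fixes the repeats, but it cannot restore the stories that never made it into any keyword feed in the first place.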

Ultimately, all media rely on prioritization: every website, newspaper, radio station, or television channel has editors who define the hierarchy of the information offered to users. And somehow RSS, in its current incarnation, never grasped this. The blame lies not only with the aggregators but with the protocol itself, which has never required publishers to signal which information matters most or least.

Another challenge is discovering useful content and curating it. How do we find good RSS feeds? How do we group, simplify, and optimize them so that information is structured as efficiently as possible? Curation is one of the biggest obstacles to the growth of social networks such as Twitter and Reddit, keeping them from reaching the staggering numbers Facebook can boast of. The problem of getting started with RSS from scratch is perhaps one of the protocol's most serious problems today, although it can be solved purely through improvements at the aggregator software level, without any changes to the protocol itself.

Still, the real shortcomings of RSS lie on the publishing side, and the most obvious one is analytics. RSS does not allow publishers to track user behavior. Feed caching by aggregators makes any attempt to count subscribers nearly impossible. Nobody knows how much time a given user spends reading an article, or even whether they opened it at all. In this sense, RSS faces the same problem as podcasting: user behavior remains a closed book to content authors and publishers.

Some users regard the lack of analytics as a feature that protects their privacy. The reality, however, is that the modern economy of online content is built around advertising, and even though I personally subscribe to everything that interests me, a subscription economy is still far from thriving or widespread. Analytics increase advertising revenue, and that makes trackers and metrics a vital tool for companies trying to succeed in a highly competitive media environment.

RSS also offers very few opportunities for effective content branding. Given how important brand equity is to modern media, losing an article's logo, colors, and fonts is an effective way to devalue the business behind the brand. This problem is not unique to RSS; it has significantly dampened publishers' appetite for Google's AMP and Facebook's Instant Articles. Brands want users to know exactly who wrote a given piece, and they are not going to adopt technologies that strip out of the user experience elements they consider integral to their business.

These are just some of the problems of RSS as a product, and together they ensure that the protocol will never reach the ubiquity needed to replace, or even crowd out, the equivalent offerings of centralized technology corporations. What, then, should we do if we want to escape Facebook's hegemony?

I believe the solution lies in a whole set of improvements. RSS as a protocol needs enhancements that would let publishers provide prioritization data and other signals whose presence is essential for improving efficiency at the aggregator level. This means updating not only the protocol but also all of the content management systems that publish RSS feeds, since only then will users feel the benefit of the new functionality.
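As a purely hypothetical illustration of what such a prioritization signal might look like (the namespace and the prio:weight element below are invented and are not part of any RSS specification), an aggregator could read an optional per-item weight and degrade gracefully for feeds that omit it:

```python
import xml.etree.ElementTree as ET

# Invented namespace and element; nothing like this exists in RSS 2.0 today.
NS = {"prio": "https://example.org/ns/priority"}

FEED = """<?xml version="1.0"?>
<rss version="2.0" xmlns:prio="https://example.org/ns/priority">
  <channel>
    <title>Example News</title>
    <item><title>Troop movements in the Middle East</title><prio:weight>1</prio:weight></item>
    <item><title>Ten pasta recipes for spring</title><prio:weight>5</prio:weight></item>
    <item><title>Local weather roundup</title></item>
  </channel>
</rss>"""

def weight(item):
    # 1 = lead story, larger numbers = less important; unlabeled items sink to the bottom.
    text = item.findtext("prio:weight", namespaces=NS)
    return int(text) if text else 99

channel = ET.fromstring(FEED).find("channel")
for item in sorted(channel.findall("item"), key=weight):
    print(weight(item), item.findtext("title"))
```

Because the element is optional, feeds from publishers and readers that ignore it would keep working exactly as they do now.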

This in turn leads to another, perhaps the hardest, task: working out RSS as a business model. Some commercial layer has to appear on top of the feeds. That would create an incentive to keep improving and optimizing RSS as a service. I would love to pay for an Amazon Prime-style subscription that, for reasonable money, gave me unlimited access to full-text feeds from several of the largest news sources. Such an approach would also improve the privacy situation.

Next, RSS aggregators should think carefully about marketing, customer acquisition, and onboarding. They should actively guide users toward interesting content and help them curate their feeds with algorithms (with settings that let users like me turn those algorithms off). Such applications could be built so that the machine learning models used to assemble the feed “live” on the user's device, ensuring maximum protection of the reader's personal data.
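As a deliberately toy sketch of the “model lives on the device” idea (the class and its training signal are invented for illustration), even a trivial keyword-weight ranker trained only on the user's own clicks never has to send anything off the device:

```python
from collections import Counter

class OnDeviceRanker:
    """Toy personal ranker: keyword weights learned only from local clicks."""

    def __init__(self):
        self.weights = Counter()

    def record_click(self, title):
        # Every word in a clicked headline becomes slightly more interesting.
        for word in title.lower().split():
            self.weights[word] += 1

    def score(self, title):
        return sum(self.weights[w] for w in title.lower().split())

    def rank(self, titles):
        # Order candidate headlines by learned interest, highest first.
        return sorted(titles, key=self.score, reverse=True)

ranker = OnDeviceRanker()
ranker.record_click("Troop movements in the Middle East")
print(ranker.rank([
    "Ten pasta recipes for spring",
    "New troop deployments reported in the Middle East",
]))
```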

Do I believe such a solution can be widely adopted? No, and it is safe to say that spreading it in the decentralized form so many people long for is likewise impossible. I do not think users genuinely care about privacy (after all, Facebook has been siphoning their data for years, and that has not stopped its growth), and they certainly cannot be called news addicts. Nevertheless, getting the business model right could attract enough readers to raise publishers' interest in an updated format. That last point is vital to bootstrapping a fresh news economy and bringing RSS back to life.


Source: https://habr.com/ru/post/411811/