ConanXin

Protocols, Not Platforms: A Technological Approach to Free Speech

Subtitle: Transforming the Internet's Economics and Digital Infrastructure to Promote Free Speech

Original: Protocols, Not Platforms: A Technological Approach to Free Speech (2019)

A decade or so ago, it was widely believed that the internet and social media would foster more speech and improve the marketplace of ideas. Over the past few years that belief has shifted dramatically, and now almost no one seems satisfied. Some argue that these platforms have become cesspools of trolling, bigotry, and hatred. Others argue that the platforms have become far too aggressive in policing language and are systematically suppressing or censoring certain viewpoints. And that does not even touch on privacy and what these platforms do (or don't do) with all the data they collect.

This situation has created a sense of crisis both inside and outside these companies. While the companies have long claimed to be defenders of free speech, they are grappling with their new status as arbiters of what is acceptable online. Meanwhile, politicians from both major parties have been attacking the companies, albeit for entirely different reasons. Some complain that the platforms have allowed foreign interference in our elections. Others complain that the platforms are used to spread disinformation and propaganda. Some accuse the platforms of being too powerful. Others call attention to inappropriate account and content removals, while still others argue that moderation efforts discriminate against certain political views.

It is evident that there are no easy solutions to these challenges, and most of the commonly proposed fixes fail to address the real-world problems, or misunderstand the technical and social constraints that may make those problems intractable.

Some have advocated tighter regulation of online content, and companies like Facebook, YouTube, and Twitter have talked about hiring thousands of employees to build out their moderation teams. At the same time, companies are investing in increasingly sophisticated technology, such as artificial intelligence, to try to spot problematic content earlier in the process. Others argue that we should change Section 230 of the Communications Decency Act, which gives platforms the freedom to decide how (or whether) to moderate. Still others suggest that no moderation should be allowed at all, at least for platforms above a certain size, so that they are treated as part of the public square.

As this article attempts to highlight, most of these solutions are not only infeasible; many of them make the original problem worse, or have other effects that are equally harmful.

This paper proposes a completely different approach, one that may seem counterintuitive but that could offer a viable plan for making speech more free while minimizing the impact of trolling, hate speech, and large-scale disinformation efforts. As a bonus, it could help users of these platforms regain control over their privacy. On top of that, it could even provide a whole new revenue stream for the platforms.

This approach: build protocols, not platforms.

To be clear, this approach would in some ways bring us back to what the internet once was. The early internet involved many different protocols: instructions and standards that anyone could use to build a compatible interface. Email uses SMTP (Simple Mail Transfer Protocol). Chat was often done via IRC (Internet Relay Chat). Usenet, a distributed discussion system, uses NNTP (Network News Transfer Protocol). The World Wide Web itself is built on its own protocol, the HyperText Transfer Protocol (HTTP).
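
To make the distinction concrete, here is a minimal sketch (my own illustration, not part of the original essay) of what it means for HTTP to be an open protocol: the request format is a published standard, so any client that follows it can talk to any server, with no platform owner in between. The host name is just an example.

```python
# A minimal sketch (not production code) showing why open protocols matter:
# HTTP is a published text format, so any client that follows the spec can
# talk to any server. Here we issue a bare HTTP/1.1 request over a socket.
import socket

HOST = "example.com"  # placeholder target; any HTTP server would do

request = (
    f"GET / HTTP/1.1\r\n"
    f"Host: {HOST}\r\n"
    "Connection: close\r\n"
    "\r\n"
)

with socket.create_connection((HOST, 80), timeout=10) as sock:
    sock.sendall(request.encode("ascii"))
    response = b""
    while chunk := sock.recv(4096):
        response += chunk

# The status line and headers are readable text defined by the protocol,
# not by any single company's platform.
print(response.split(b"\r\n")[0].decode("ascii", errors="replace"))
```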

However, over the past few decades, rather than building new protocols, the internet has grown up around private, controlled platforms. These platforms can function much like the earlier protocols, but they are controlled by a single entity. This has happened for a variety of reasons. Most obviously, a single entity that controls a platform can profit from it. In addition, having a single entity in charge often means that new features, upgrades, and bug fixes can be rolled out much faster, which helps grow the user base.

In fact, some platforms today leverage existing open protocols but build walls around them to lock users in, rather than simply providing an interface. This underscores that there is no either-or choice between platforms and protocols, but rather a spectrum. The argument presented here, however, is that we need to move much further toward a world of open protocols rather than platforms.

Moving to a world dominated by protocols rather than proprietary platforms would solve many of the problems the internet faces today. Rather than relying on a few giant platforms to police online speech, there would be widespread competition: anyone could design their own interfaces, filters, and add-on services, letting the most effective ones succeed without requiring any single gatekeeper to pass judgment on every voice. It would allow end users to decide their own tolerance for different types of speech, while making it easier for most people to avoid the most problematic speech, without completely silencing anyone and without leaving the platforms themselves to decide who may speak.

In short, it will push power and decision-making to the ends of the network, rather than having it concentrated in a small group of very powerful companies.

At the same time, it could lead to new and more innovative features, and give end users greater control over their own data. Finally, it could help usher in a new set of business models that do not rely solely on monetizing user data.

Historically, the internet has drifted from decentralized protocols toward centralized platforms, in part because of the incentive structures of the earlier internet. Protocols are hard to monetize, which makes it hard to keep them up to date and to deliver new features in compelling ways. Companies often stepped in and effectively "took over," creating more centralized platforms, adding their own features (and their own business models). They could dedicate far more resources to those platforms and business models, creating a virtuous cycle (and a degree of lock-in) for the platform.

However, this has brought its own difficulties. With control come demands for accountability, including calls for tighter regulation of the content hosted on these platforms. It has also raised concerns about filter bubbles and bias. And it has given a handful of internet companies a dominance that, quite understandably, upsets many people.

A renewed focus on protocols rather than platforms can address many of these issues, and other recent developments suggest it can overcome many of the early shortcomings of protocol-based systems. The result could be the best of both worlds: useful internet services, with competition driving innovation, not controlled entirely by giant corporations yet financially sustainable, giving end users more control over their own data and privacy, and offering far less of a foothold to misinformation and disinformation.

Problems with the early protocols and what platforms did well

While the early internet was dominated by protocols rather than platforms, the limitations of those early protocols help show why platforms came to dominate. There were many different protocols and platforms, each with its own reasons for succeeding or failing over time, but to illustrate the issues discussed here, we will limit the comparison to Usenet and Reddit.

Conceptually, Usenet and Reddit are similar. Both involve a series of forums generally organized around specific topics. On Usenet these are called newsgroups; on Reddit they are subreddits. Each newsgroup or subreddit has moderators, who have the power to set their own rules. Users can post new messages in each group and receive threaded replies from others, creating a discussion-board effect.

However, Usenet is an open protocol (Network News Transfer Protocol, NNTP) that anyone can access using a variety of different applications. Reddit is a centralized platform controlled entirely by one company.

To access Usenet, you originally needed a special newsreader client application and access to a Usenet server. Many ISPs offered their own news servers (when I first went online in 1993, I accessed Usenet through a university news server and a university-provided newsreader). As the web became more popular, more organizations attempted to provide web front-ends to Usenet. In the early days the space was dominated by Deja News, which provided the earliest web interface to Usenet and later added a host of additional features, including (most usefully) a comprehensive search engine.

While Deja News experimented with a variety of business models, it never found lasting success, and its assets, including the Usenet archive, were acquired by Google in 2001 and became a key part of Google Groups (which still provides a web interface to most Usenet newsgroups, alongside Google's own email-style mailing lists).

Much of Usenet was complex and unintuitive (especially before web interfaces became widespread). A running joke on Usenet was that each September the service filled up with bewildered newcomers, typically college freshmen who had just received their first accounts and knew little about the common practices and etiquette of the service. September thus tended to be the month when old-timers found themselves wearily "correcting" the behavior of the new arrivals until they conformed to the norms of the system.

In that spirit, the period after September 1993 is remembered by old-school Usenet users as "the September that never ended," or "Eternal September." That was when the proprietary platform America Online (AOL) opened its doors to Usenet, leading to a never-ending influx of unacculturated new users.

Because there were many different Usenet servers, content was not hosted centrally but propagated across servers. This had advantages and disadvantages, including that different servers could handle different content in different ways: not every Usenet server had to carry every group. But it also meant there was no central authority to deal with disruptive or malicious activity. Some servers could choose to block certain newsgroups, and end users could use tools such as kill files to filter out unwanted content based on criteria they chose themselves.
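
As an illustration of the kill-file idea (my own sketch, with hypothetical rule names and posts), the filtering logic lives entirely on the user's side: the user's client decides what to hide, and no server operator is involved.

```python
# A minimal sketch of a kill-file-style client-side filter: the user,
# not the server, decides which posts to hide.
import re

# Hypothetical kill rules a user might keep in their own kill file.
KILL_SUBJECT_PATTERNS = [r"(?i)make money fast", r"(?i)free\s+crypto"]
KILL_AUTHORS = {"spammer@example.com"}

def is_killed(post: dict) -> bool:
    """Return True if the post matches any of the user's kill rules."""
    if post["author"].lower() in KILL_AUTHORS:
        return True
    return any(re.search(p, post["subject"]) for p in KILL_SUBJECT_PATTERNS)

posts = [
    {"author": "alice@example.net", "subject": "Meeting notes for rec.arts.sf"},
    {"author": "spammer@example.com", "subject": "MAKE MONEY FAST!!!"},
]

for post in posts:
    if not is_killed(post):
        print(post["subject"])  # only unfiltered posts are shown to this user
```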

Another major disadvantage of the original Usenet was that it was not particularly adaptable or flexible, especially for large-scale changes. Because it was a decentralized set of protocols, there was an involved consensus process requiring agreement from a broad range of parties before any modification to the protocol could be made. Even minor changes tended to require a great deal of work, and even then they were not always universally adopted. Creating a new newsgroup was itself a fairly involved process: some hierarchies had a formal approval process, while the "alt" hierarchy was easier to set up in (though there was no guarantee that every Usenet server would carry a given group). By contrast, setting up a new subreddit is easy. Reddit has a product and engineering team that can make any changes it wants, although the user base has little say in how those changes happen.

Perhaps the biggest problem with the old system was the lack of a clear business model. As the demise of Deja News showed, running a Usenet service was never particularly lucrative. Over time, more "professional" Usenet servers emerged that charged for access, but these came along later, never reached the scale of internet platforms like Reddit, and are generally seen as catering mainly to those trading in infringing content.

The current problems faced by large platforms

Over the past two decades, the rise of internet platforms — Facebook, Twitter, YouTube, Reddit, and others — has more or less replaced the previously used protocol-based systems. With these platforms, there is a (usually for-profit) company serving the end user. Funding for these services tends to come first from venture capital and then from advertising (often highly targeted).

These platforms are built on the World Wide Web and are often accessed through traditional Internet web browsers or, increasingly, mobile device applications. The benefits of building a service as a platform are obvious: the owner has ultimate control over the platform and is therefore better able to monetize the platform through some form of advertising (or other ancillary service). However, it does incentivize these platforms to capture more and more data from their users to better target them.

This has led to legitimate concerns and resistance from users and regulators who are concerned that platforms are not operating fairly or properly "protecting" the end-user data they have been collecting.

The second problem facing the biggest platforms today is that, as they grow larger and more central to everyday life, there is growing concern about what users are able to publish on them and what responsibility the operators have for moderating or blocking that content. The platforms face increasing pressure from users and politicians to be more proactive in moderating this content. In some cases, laws have been passed that more explicitly require platforms to remove certain content, slowly eroding the broad immunity many platforms have enjoyed over their moderation choices (for example, under Section 230 of the U.S. Communications Decency Act or the European Union's E-Commerce Directive).

As a result, platforms feel they must not only be more proactive but also testify before various legislative bodies, hire thousands of employees as content moderators, and invest heavily in moderation technology. Yet even with these regulatory pressures and massive investments in people and technology, it remains unclear whether any platform can really "do" content moderation well at scale.

Part of the problem is that any moderation decision will upset someone. Obviously, those whose content is moderated tend to be unhappy about it, and so are those who wished to see or share it. At the same time, in many cases, a decision not to moderate content also upsets people. The platforms currently face considerable criticism for their moderation choices, including accusations (mostly unsubstantiated) that political bias drives those choices. As platforms come under pressure to take on more responsibility, every moderation choice puts them in a bind: remove controversial content and anger those who created or support it; leave it up and anger those who find it objectionable.

This puts the platforms in a no-win situation. They can keep spending more money on the problem and keep engaging with the public and politicians, but it is unclear whether any outcome will "satisfy" enough people. On any given day it is easy to find people angry that Facebook, Twitter, or YouTube failed to remove certain content; when the platforms do eventually remove it, those critics are immediately replaced by people angry that the content was taken down.

This setup frustrates everyone and is unlikely to improve anytime soon.

Protocols to the rescue

In this post, I propose that we return to a world where protocols dominate the internet, not platforms. There is reason to believe that moving to a protocol system can solve many of the problems associated with platforms today, and can do so while minimizing the problems inherent in protocols decades ago.

While there is no silver bullet, protocol systems can better protect user privacy and free speech, while minimizing the impact of online abuse and creating new, compelling business models that are more aligned with users’ interests.

The key to making this work is that there would be specific protocols for the various types of platforms we see today, but many competing implementations of interfaces to each protocol. The competition would come from those implementations. The low cost of switching from one implementation to another would reduce lock-in, and because anyone building an interface would have access to all of the content and users on the underlying protocol, the barrier to entry would be much lower. You would not need to build a whole new Facebook if you already had access to everyone using the "social network protocol" and simply provided a different, or better, interface to it.

To some extent, we already see an example of this in email. Email is built on open standards such as SMTP, POP3, and IMAP, and there are many different implementations. The email systems popular in the 1980s and 1990s relied on a client-server setup in which the provider (whether a commercial ISP, a university, or an employer) hosted mail on a server only briefly, until client software such as Microsoft Outlook, Eudora, or Thunderbird downloaded it to the user's own computer. Alternatively, users could access their mail through a text-based interface such as Pine or Elm.
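
A small sketch of what this openness means in practice (my own example; the host, account, and password are placeholders): because IMAP is a published protocol, the same few lines of standard-library code can read a mailbox regardless of which provider hosts it or which other clients also use it.

```python
# A minimal sketch of "email is a protocol": any standards-compliant client
# can read the same mailbox. Host, account, and password are placeholders.
import imaplib

IMAP_HOST = "imap.example.com"   # hypothetical provider
USERNAME = "user@example.com"
PASSWORD = "app-specific-password"

def list_recent_subjects(limit: int = 5) -> None:
    """Log in over IMAP and print the subjects of the most recent messages."""
    with imaplib.IMAP4_SSL(IMAP_HOST) as imap:
        imap.login(USERNAME, PASSWORD)
        imap.select("INBOX", readonly=True)
        _, data = imap.search(None, "ALL")
        for num in data[0].split()[-limit:]:
            _, msg_data = imap.fetch(num, "(BODY.PEEK[HEADER.FIELDS (SUBJECT)])")
            print(msg_data[0][1].decode(errors="replace").strip())

# Because IMAP is an open protocol, the same code works whether the mailbox
# lives at Gmail, Outlook.com, or a self-hosted server.
if __name__ == "__main__":
    list_recent_subjects()
```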

The late 1990s saw the rise of web-based email, first with Rocketmail (which was eventually acquired by Yahoo and became Yahoo Mail) and Hotmail (which was acquired by Microsoft and became Outlook years later). Google launched its own product, Gmail, in 2004, which kicked off a new wave of innovation as Gmail offered greater storage space for email as well as a faster user interface.

However, thanks to these open standards, there is a great deal of flexibility. A user can manage a non-Gmail address from within the Gmail interface, or use a Gmail account in a completely different client such as Microsoft Outlook or Apple Mail. On top of that, new interfaces can be layered onto Gmail itself, for example through Chrome extensions.

This setup has many benefits for the end user. Even though a platform like Gmail dominates the market, switching costs are much lower. If a user dislikes how Gmail handles certain features, or is concerned about Google's privacy practices, it is relatively easy to switch to another provider without losing access to old contacts or the ability to email anyone, including contacts who remain Gmail users.

Note that this flexibility is itself a powerful incentive for Google to treat Gmail users well; Google is unlikely to take actions that could trigger a rapid exodus. This is unlike fully proprietary platforms such as Facebook or Twitter, where leaving means you can no longer interact with the people there in the same way or easily access their content and communications. With a system like Gmail, it is easy to export your contacts and even your old email, start over with a different service, and lose none of your ability to stay in touch with anyone.

In addition, it keeps the playing field more open. While Gmail is a particularly popular email service, other companies have been able to build significant services, such as Outlook.com and Yahoo Mail, or to launch successful email startups targeting different markets, such as Zoho Mail and ProtonMail. It also leaves the door open for other services to build on top of the existing email ecosystem without fear of being locked out by a single platform. Twitter and Facebook, for example, have a history of changing product direction and cutting off third-party apps, but in email there is a thriving market of companies such as Boomerang, SaneBox, and Mixmax, each offering additional services that run across many different email platforms.

The end result is more competition to improve the service, both among email providers and within the ecosystem around them, while the major providers are pushed to act in users' best interests, because the greatly reduced level of lock-in means those users can always leave.

Protect freedom of speech, but limit the impact of abusive behavior

Perhaps the most contentious part of the content moderation debate is how to handle "abusive" behavior. Almost everyone recognizes that such behavior exists online and can be disruptive, but there is no agreement on what exactly it includes. Troubling behavior spans many categories, from harassment to hate speech to threats to trolling to obscenity to doxxing to spam, and more. Yet none of these categories has a complete definition, and much of it is in the eye of the beholder. A person forcefully expressing an opinion, for example, may be experienced as harassing by the recipient. Neither side need be "wrong," but asking each platform to adjudicate such cases is an impossible task, especially when it must handle hundreds of millions of pieces of content every day.

Currently, the platforms are the final, centralized authorities on these questions. Many address this with increasingly complex internal "legal" regimes (whose rules are often opaque to end users) and then hand those "laws" to large numbers of employees (often outsourced and relatively poorly paid) who have very little time to judge thousands of pieces of content.

Under such a system, Type I errors ("false positives") and Type II errors ("false negatives") are not just common; they are inevitable. Content that most people think should be removed stays up, and content that many think should stay up gets removed. Different moderators may view the same content from completely different perspectives, and it is nearly impossible for them to take context into account, partly because much of the context is unavailable or not obvious to them, and partly because the time needed to fully investigate each case would make moderation prohibitively expensive. Likewise, no technical solution can properly account for context or intent: computers cannot reliably recognize sarcasm or hyperbole, even when it would be obvious to any human reader.

A protocol-based system, however, moves much of the decision-making away from the center and out to the ends of the network. Rather than relying on a single centralized platform, with all the internal biases and incentives that entails, anyone could create their own set of rules about what they do and do not want to see. Since most people will not want to manage every preference and threshold by hand, this task could easily be delegated to third parties of their choosing, whether competing interface providers, nonprofits, or local communities. Those third parties could build whatever interfaces, and whatever rules, they like.

For example, those interested in civil liberties might subscribe to moderation filters or even add-ons published by the American Civil Liberties Union or the Electronic Frontier Foundation. People deeply involved in politics might choose a filter from their preferred party (this obviously raises concerns about increased "filter bubbles," but as discussed below, there is reason to believe the effect would be limited).

Entirely new third parties focused on delivering a better experience could emerge, competing not just on moderation filters but on the whole user experience. Imagine a competing interface to Twitter that comes preconfigured (and constantly updated) to remove content from trolling accounts and to surface more thoughtful, substantive stories in place of clickbait trending topics. Or an interface that simply offers a better layout for conversations or for reading the news.

The key is to ensure that the "rules" are not only shareable but fully transparent and ultimately controlled by each end user. So I might choose to use the moderation settings published by the Electronic Frontier Foundation, on an interface provided by some new nonprofit, but tweak them at the edges to fit my own preferences. If I mainly want to use the network to read news, I might use an interface provided by The New York Times. If I want to chat with friends, I might use an interface designed specifically for conversation among small groups.
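
Here is a rough sketch (entirely my own, with a made-up rule format) of what "shareable, transparent rules" could look like in code: a third party publishes a rule set, the user layers personal tweaks on top, and the merged rules are applied client-side.

```python
# A minimal sketch of shareable, user-controlled moderation rules: a published
# rule set plus personal overrides, applied on the user's side of the network.
from dataclasses import dataclass, field

@dataclass
class FilterRules:
    blocked_accounts: set[str] = field(default_factory=set)
    blocked_keywords: set[str] = field(default_factory=set)

    def merged_with(self, other: "FilterRules") -> "FilterRules":
        """Combine a published rule set with the user's own overrides."""
        return FilterRules(
            self.blocked_accounts | other.blocked_accounts,
            self.blocked_keywords | other.blocked_keywords,
        )

    def allows(self, post: dict) -> bool:
        if post["author"] in self.blocked_accounts:
            return False
        text = post["text"].lower()
        return not any(k in text for k in self.blocked_keywords)

# A rule set a nonprofit might publish, plus the user's personal additions.
published = FilterRules(blocked_accounts={"known_troll"}, blocked_keywords={"clickbait"})
personal = FilterRules(blocked_keywords={"spoilers"})
rules = published.merged_with(personal)

feed = [
    {"author": "friend", "text": "Thoughtful thread about protocols"},
    {"author": "known_troll", "text": "You will not believe this clickbait"},
]
print([p["text"] for p in feed if rules.allows(p)])
```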

In such a world, we could have a million different content moderation systems applied to the same common corpus of content, each taking a completely different approach, and see which ones work best. No centralized platform would be the sole arbiter of what is allowed. Instead, many different individuals and organizations would be able to tune the system to their own comfort levels and share their settings with others, letting competition happen at the implementation layer rather than at the underlying social network layer.

This would not entirely prevent anyone from using the network to speak, but if the more popular interfaces and moderation filters voluntarily chose to exclude them, the reach and impact of their speech would be more limited. It suggests a more democratic approach, in which filters compete in a marketplace: if people feel a given interface or filter provider is doing a poor job, they can move to another or adjust the settings themselves.

The result is less centralized control, fewer grounds for claims of "censorship," more competition, more diverse approaches, and more control for end users, while potentially minimizing the reach and impact of content that many people find abusive. Indeed, with a variety of filter options available, an individual's reach would shrink roughly in proportion to how objectionable most people find that individual's speech, without requiring an all-or-nothing ban.

For example, there has been enormous controversy over how platforms handle the accounts of Alex Jones, the performer behind InfoWars who regularly promotes conspiracy theories. Users put intense pressure on the platforms to cut off his accounts, and when the platforms finally did, they met an equal and opposite backlash from his supporters, who claimed he was removed simply because of bias against his politics.

In a protocol-based system, those who had long concluded that Alex Jones was not acting in good faith could have blocked him much earlier, while other interface providers, filter providers, and individuals could each decide how to respond to any particularly egregious behavior. His most committed supporters might never cut him off, but his overall reach would be limited. Those who do not want to be bothered by his rantings would not have to see them, while those who want to could still seek them out.

A marketplace of many different filters and interfaces (plus the ability to customize your own) would allow for much greater granularity. Conspiracy theorists and trolls would find it harder to surface in "mainstream" filters, but they would not be entirely silenced for those who choose to listen. Unlike today's centralized systems, in which every voice is treated more or less equally (or banned outright), a protocol-focused world would make it harder for extremist views to gain mainstream traction.

Protect user data and privacy

A related benefit is that a protocol-based system would almost certainly improve privacy. Under such a system, a social-media-style service would not need to collect and host all of your data. Instead, just as filtering decisions can be pushed to the ends of the network, so can data storage. This could develop in many different ways, but one fairly simple approach is for end users to maintain their own "data store" through an application they control. Since we are unlikely to return to a world in which most people store their data locally (especially as we increasingly use multiple devices, including computers, smartphones, and tablets), hosting that data in the cloud still makes sense, but the data can remain entirely under the end user's control.

In such a world, you might use a dedicated data-store provider to host your data in the cloud as encrypted blobs that the provider itself cannot read, but that you can choose to unlock for any given service at any given moment, for whatever purpose is necessary. That data store could also serve as your identity. If you then wanted to use a Twitter-like protocol, you would simply grant the service access to the parts of your data store it needed. You could set what it may (and may not) access, see when and how it accesses your data and what it does with it, and cut off access at any time if that access is abused. In some cases, the system could be designed so that a service never receives your raw data at all, only aggregated or hashed digests, adding another layer of privacy.
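
To make this concrete, here is a small sketch (my own invention, not a description of any real product; all names are hypothetical) of a user-controlled data store: data rests encrypted under a key only the user holds, access is granted and revoked per service, and a service can be handed a hashed digest instead of the raw data.

```python
# A minimal sketch of a user-controlled data store: data rests encrypted,
# the user grants and revokes access per service, and a service can be given
# only a hashed digest instead of the raw data.
# Requires the third-party "cryptography" package (pip install cryptography).
import hashlib
from cryptography.fernet import Fernet

class PersonalDataStore:
    def __init__(self) -> None:
        self._key = Fernet.generate_key()      # held by the user, not the host
        self._fernet = Fernet(self._key)
        self._blobs: dict[str, bytes] = {}     # all the hosting provider sees
        self._grants: set[str] = set()         # services the user has approved

    def put(self, name: str, value: str) -> None:
        self._blobs[name] = self._fernet.encrypt(value.encode())

    def grant(self, service: str) -> None:
        self._grants.add(service)

    def revoke(self, service: str) -> None:
        self._grants.discard(service)

    def read(self, service: str, name: str) -> str:
        if service not in self._grants:
            raise PermissionError(f"{service} has no access grant")
        return self._fernet.decrypt(self._blobs[name]).decode()

    def digest(self, name: str) -> str:
        """Hand out only a hash of the data, never the plaintext."""
        plaintext = self._fernet.decrypt(self._blobs[name])
        return hashlib.sha256(plaintext).hexdigest()

store = PersonalDataStore()
store.put("contacts", "alice,bob,carol")
store.grant("twitter-like-service")
print(store.read("twitter-like-service", "contacts"))  # allowed while granted
store.revoke("twitter-like-service")
print(store.digest("contacts"))  # a service can verify without ever seeing it
```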

In this way, end users could still use their data with various social media tools, but instead of that data being locked in opaque silos with no access, transparency, or control, control would shift entirely to the end user. Intermediaries would have an incentive to behave well in order to avoid being cut off. End users would gain a much better understanding of how their data is actually used, and it would become easier to sign up for other services and even to pass data securely from one entity to another, enabling powerful new features.

Some fear that under such a system various intermediaries would still focus on vacuuming up all of your data, but there are several reasons to think otherwise. First, any provider that becomes too "greedy" about your data risks driving users away, since they can keep using the same protocol while switching to a different interface or filter provider. Second, separating data storage from the interface providers creates more transparency for end users. The idea is that your data sits with a storage or cloud service in encrypted form, so the host cannot read it. Interface providers must request access, and tools and services can be built that let you (1) decide which data a given service may access, for how long, and for what purpose, and (2) cut off that access if you become uncomfortable with how the data is being used.

Interface or filter operators could, of course, try to abuse their access to collect and retain your data, but there are potential technical responses here as well, including designing the protocol so that only the relevant data can be pulled in real time from your data store. If a service instead keeps its own copy of your data, that could trigger a warning that your data is being retained against your wishes.

Finally, as explained below in the discussion of business models, interface providers would have stronger incentives to respect end users' privacy wishes, since their revenue would likely be driven more directly by usage than by data monetization. Alienating your user base can cause it to flee, harming the interface provider's own bottom line.

Enable greater innovation

Protocol-based systems, by their very nature, are likely to bring more innovation to this space, in part by allowing anyone to create an interface for accessing the content. That level of competition will almost certainly lead to a variety of attempts to improve every aspect of the service: competing services can offer better filters, better interfaces, better or different features, and more.

Right now we have only inter-platform competition, which happens to some degree but is quite limited. The market seems able to support only a handful of giants, so while Facebook, Twitter, YouTube, Instagram, and a few others may compete for users' attention at the margins, their incentives to improve their own services are weak.

However, if anyone can launch a new interface, new features, or a better design, competition within a given protocol (where a platform used to be) can quickly become fierce. Many ideas will be tried and discarded, but this kind of real-world laboratory is likely to show how quickly such services can innovate and deliver more value. Today, many platforms provide application programming interfaces (APIs) that let third parties develop new interfaces, but those APIs are controlled by the central platform, which can change them at will; Twitter, for instance, has repeatedly changed its support for APIs and third-party developers. Under a protocol system, the APIs would be open, anyone could build on top of them, and there would be no central company able to cut developers off.

Beyond that, it could open up entirely new areas of innovation, including ancillary services such as companies focused on providing better content moderation tools, or the competing data stores discussed earlier, which would simply manage access to encrypted data without being able to read it or act on it. Such services might compete on speed and uptime rather than on additional features.

For example, in a world of open protocols and private data stores, a thriving business could develop around "agents" that sit between your data store and various services, automating tasks and providing additional value. A simple version might be an agent that scans various protocols and services for relevant news about a particular topic or company and messages you as soon as it finds something.
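
A toy sketch of such an agent (mine; the feed URLs and topics are placeholders) might poll open feeds, match items against topics the user cares about, and surface anything new:

```python
# A minimal sketch of a personal "agent": poll open feeds, match items against
# the user's topics, and flag anything new. Uses only the standard library.
import time
import urllib.request
import xml.etree.ElementTree as ET

FEEDS = ["https://example.com/news.rss"]   # hypothetical open/RSS endpoints
TOPICS = ["open protocols", "data portability"]
seen: set[str] = set()

def fetch_titles(url: str) -> list[str]:
    """Download an RSS feed and return its item titles."""
    with urllib.request.urlopen(url, timeout=10) as resp:
        root = ET.fromstring(resp.read())
    return [t.text or "" for t in root.iter("title")]

def check_once() -> None:
    for url in FEEDS:
        try:
            titles = fetch_titles(url)
        except (OSError, ET.ParseError) as exc:   # unreachable or malformed feed
            print(f"Could not read {url}: {exc}")
            continue
        for title in titles:
            key = f"{url}:{title}"
            if key in seen:
                continue
            seen.add(key)
            if any(topic.lower() in title.lower() for topic in TOPICS):
                print(f"Alert: {title}")   # in practice: notify the user

if __name__ == "__main__":
    while True:
        check_once()
        time.sleep(300)   # poll every five minutes
```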

Create new business models

One of the main reasons the early internet protocols faded while centralized platforms rose was the business model problem. Owning a platform (if it catches on) has proved to be enormously profitable. Building and maintaining a protocol, by contrast, has long been difficult: most of the work tends to be done by volunteers, and protocols quietly languish from neglect. For example, OpenSSL, a critical security library on which much of the internet relies, was found in 2014 to have a major security flaw, dubbed Heartbleed. Around that time people noticed how little support OpenSSL actually had: it was maintained by a loose group of volunteers and a single full-time employee, and the foundation that managed it received relatively little funding.

There are many such stories. As noted earlier, Deja News could not build much of a business around Usenet and ended up selling to Google. Email has never been seen as a money-making protocol; it is usually bundled free with an ISP account. Some early companies tried to build web platforms around email, but the two most important products were quickly acquired by larger companies (Rocketmail by Yahoo, Hotmail by Microsoft) and folded into bigger offerings. Google eventually launched Gmail and brought email onto its own platform, but it is rarely seen as a major revenue driver. Still, the success Google and Microsoft have had with Gmail and Outlook shows that large companies can build very successful services on open protocols. And if Google ever really mishandled Gmail, or did something questionable with the service, people could easily move to a different email system while keeping access to everyone they communicate with.

We have discussed competition among interface and filter implementations to provide better service, but there could also be competition among business models. Data-store services might charge for premium access, storage, and security, much as Dropbox and Amazon Web Services do. There could also be a variety of business models around implementations and filters, including subscriptions for premium services or features and other forms of payment.

While there are legitimate concerns about the data-surveillance mechanisms of the current advertising market on social media platforms, there is reason to believe that less data-intensive advertising models could flourish in the world described here. With data and privacy controls in the hands of end users, aggressively collecting every available scrap of data would no longer be feasible or particularly useful. Instead, several different kinds of advertising models might emerge.

First, there could be advertising models based on much more limited data, focused on matching intent or on pure brand advertising. To see the possibility, look back at Google's original advertising model, which relied less on knowing everything about you than on knowing what you were searching for at that particular moment. Or we could return to a more traditional world of brand advertising, in which advertisers seek out the right communities; a car company, for example, would advertise within micro-communities of people interested in cars.

Alternatively, given the degree of control end users would have over their data, a reverse-auction style business model could develop, in which end users offer up some of their own data in exchange for access, discounts, or deals from certain advertisers. The point is that the end user, not the platform, would be in control.

Perhaps most interestingly, there are new possibilities through which a protocol itself could become more sustainable. With the development of cryptocurrencies and tokens over the past few years, it is theoretically possible to build a protocol that uses a cryptocurrency or token whose value grows with usage of the protocol. Loosely speaking, a protocol token is the equivalent of equity in a company, except that its value is tied not to the financial success of any one company but to the value of the whole network.

Without going too deep into how this works, these forms of currency derive their value from the protocols they support. As more people use the protocol, the value of the currency or token rises. In many cases, using the currency or token may be necessary to operate the protocol itself, so as the protocol is used more widely, demand for the token increases while supply stays fixed or grows only according to a predetermined schedule.

This gives more people an incentive to support and use the protocol, increasing the value of the associated currency. There are already attempts to create protocols in which the organization behind the protocol retains a percentage of the tokens while distributing the rest. In theory, if such a system catches on, appreciation of the token would help fund the ongoing maintenance and operation of the protocol, effectively solving the funding problem that has historically plagued the open protocols of the internet.

Likewise, the various implementers of interfaces, filters, or agents could benefit from appreciation of the token. Different models could emerge, but implementations might earn a specific share of tokens, and as they help grow usage of the network, the value of their own holdings rises with it. Indeed, token distribution could be tied to the number of users on a particular interface to keep incentives aligned (though some mechanism would be needed to prevent fake users from gaming the system). Alternatively, as mentioned above, use of the token could be integral to the actual functioning of the system, just as Bitcoin is a key part of how its open blockchain ledger operates.
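
As a purely illustrative sketch (my own, with made-up numbers and interface names), tying token distribution to verified active users per interface could look something like this:

```python
# A toy sketch of the incentive scheme described above: a fixed pool of newly
# issued tokens is split among interface providers in proportion to their
# verified active users.
def distribute_tokens(pool: float, active_users: dict[str, int]) -> dict[str, float]:
    """Split `pool` tokens pro rata by verified active users per interface."""
    total = sum(active_users.values())
    if total == 0:
        return {name: 0.0 for name in active_users}
    return {name: pool * users / total for name, users in active_users.items()}

# Hypothetical interfaces built on the same protocol.
interfaces = {"calm-reader": 40_000, "power-user-client": 10_000, "niche-forum": 2_500}
print(distribute_tokens(10_000.0, interfaces))
# Each provider's payout, and the value of tokens it already holds, grows only
# if it attracts and retains real users, aligning its incentives with theirs.
```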

In many ways, this setup aligns the interests of the service's users with those of the protocol's developers and interface designers far better than today's arrangements. In platform-based systems, the incentive is either to charge users directly (putting the platform's interests somewhat at odds with users') or to collect ever more data in order to advertise to them. In theory, "good" advertising can be valuable to end users, but in most cases, when platforms collect that much data in order to target ads, users come to feel that the platform's interests and their own have diverged.

Under a tokenized system, by contrast, the key driver is increasing usage in order to increase the value of the token. This obviously raises its own incentive problems: there are already concerns about platforms demanding too much of our time, and any service faces challenges when it grows too large. But again, the protocol encourages competition to provide better interfaces, better features, and better moderation, which can help keep that problem in check. An interface could even compete by offering a deliberately more limited experience that reduces information overload.

However, the ability to align the incentives of the network itself with economic interests creates a rather unique opportunity that is now being explored by many.

Where it may not apply

This is not to say that a protocol-based system would solve every problem. Much of the above is speculative; indeed, history shows that platforms outgrew protocols precisely because protocols had only a limited ability to thrive.

Complexity kills

Any protocol-based system risks being too complex and cumbersome to attract a large enough user base. Users do not want to fiddle with endless settings or juggle different apps just to make things work; they want to understand what a service does and use it easily. Platforms have historically been quite good at this side of the experience, especially at onboarding new users.

Any new protocol-based system should, one hopes, learn from the successes of today's platforms and build on them. Likewise, competition at the service level within a protocol may create strong incentives to deliver a better user experience, as would the value of any associated cryptocurrency, which is effectively tied to creating that better experience. Indeed, providing the easiest and most user-friendly interface to a protocol may become one of the key axes of competition.

Finally, one historical reason platforms have won is that having everything controlled by a single entity brings noticeable performance benefits. In a world of protocols with separate data stores and interfaces, you would depend on multiple companies connecting to one another without lag. Internet giants such as Google, Facebook, and Amazon have honed their systems to work together seamlessly, and stitching together multiple third parties carries more risk. However, there has been substantial technical progress in this area (indeed, the large platform companies have open sourced some of their own infrastructure), and broadband speeds continue to increase, which should minimize this potential hurdle.

Existing platforms are too big and will never change

Another potential stumbling block is that the existing platforms (Facebook, YouTube, Twitter, Reddit, and so on) are already so large and entrenched that replacing them with a protocol-based approach seems nearly impossible. One answer is to build entirely new protocol-based systems from scratch. That may work, but the platforms themselves could also move toward protocols.

To the suggestion that platforms might do this themselves, many will ask why they ever would, since it means giving up their current monopoly control over the data in the system, returning that data to the control of end users, and allowing competing services to use the same protocol. Yet there are several reasons to think some platforms might accept that trade-off.

First, as these platforms come under ever more pressure, they increasingly have to admit that what they are doing now is not working and will not work indefinitely. The current mode of operation only invites more and more pressure to "fix" problems that may be unsolvable. Moving to a protocol system could be a way for existing platforms to relieve themselves of the impossible burden of policing everything everyone does on their services.

Second, continuing on the current path will cost more and more. Facebook has committed to hiring 10,000 more moderators, and YouTube has likewise promised to hire "thousands." Those salaries are an ever-growing expense for these companies. Switching to a protocol-based system would push the moderation work out to the ends of the network, or to competing third parties, reducing those costs for the large platforms.

Third, existing platforms could use protocols as a way to compete with other internet giants. Google, for example, has tried several times to build a Facebook-style social network, and each attempt has failed. If it still believes there should be an alternative to Facebook, it might see the appeal of offering one based on an open protocol; having effectively conceded that it cannot build a winning proprietary network of its own, an open protocol system becomes an attractive alternative, if only to undercut Facebook.

Finally, if the token/cryptocurrency approach proves to be a way to support successful protocols, it may even be more valuable to build these services as protocols rather than as centralized, controlled platforms.

This will exacerbate the filter bubble problem

Some argue that this approach could actually make some of the problems of abusive online content worse. They contend that allowing abusive individuals, whether garden-variety trolls or outright neo-Nazis, any ability to express themselves is itself a problem. Going further, they argue that if you allow competing services, you end up with cesspool corners of the internet where the worst actors congregate unhindered.

While I concede the possibility, it hardly seems inevitable. One response is that such people have already flooded the existing social networks, which have had little success weeding them out. More importantly, a protocol approach would isolate them to some extent, since their content would be unlikely to surface in the most widely used implementations and services on the protocol. They could stew in their own dark corners, but their ability to infect the rest of the internet and, importantly, to find and recruit others would be sharply limited.

To some extent we have already seen this. After being pushed off sites like Facebook and Twitter, such users have congregated on alternative services dedicated to them, and those services have not been particularly successful at scaling or growing over time. There will always be some people with extreme ideas, but giving them a small space of their own may protect the wider internet better than repeatedly kicking them from platform to platform.

Handling more clearly objectionable content

A key assumption here is that much of the "objectionable" content causing trouble falls into a broad gray area rather than being black and white. However, some content, often content that violates various laws, is much more clearly out of bounds. There are legitimate concerns that such a setup could allow communities to form around child pornography, revenge porn, stalking, doxxing, or other criminal activity.

The reality, of course, is that such communities already form today, often on the dark web, and they are dealt with largely through law enforcement (and sometimes investigative journalism). The same would likely be true here; there is little reason to think this problem would look much different in a protocol-centric world than it does today.

Moreover, under an open protocol system there would actually be greater transparency. Groups such as civil society organizations or law enforcement agencies that monitor hate groups could build and deploy agents that watch these spaces and flag particularly vile activity that warrants attention. Rather than having to track down a stalker manually, for instance, a digital agent could scan the wider protocol for anything of concern and notify the police or other relevant contacts directly.

Examples in practice

As noted above, this shift could happen in a number of ways. Existing services may find the burden of being a centralized platform too costly and look for an alternative model; a tokenized or cryptocurrency-based approach might even make the transition financially attractive.

Alternatively, new protocols could be created to make this happen, and there have already been many attempts at different levels. Services such as IPFS (the InterPlanetary File System) and its related project Filecoin have laid foundations and infrastructure for distributed services built on a protocol and an associated currency. Tim Berners-Lee, the inventor of the World Wide Web, has been working on a system called Solid, through his company Inrupt, intended to help enable a more distributed web. Other efforts, such as the IndieWeb movement, have been rallying people to build many of the pieces that could contribute to a future of protocols rather than platforms.

In either case, if a protocol is proposed and begins to gain traction, we should expect to see a few key things: multiple implementations and services on the same protocol, giving users a choice of which to use rather than locking them into one. We may also see the rise of new lines of business around secure data stores, since users would no longer hand their data to platforms for free but would retain control over it. Other new services and opportunities would likely follow, particularly around building better experiences for users, and the competition would grow ever more intense.

In conclusion

Over the past half century of networked computing, the pendulum has swung back and forth between client-side and server-side power: from mainframes and dumb terminals, to powerful desktop computers, to web applications and cloud computing. We may now see a similar swing in this space. We have gone from a world led by protocols to a world in which centralized platforms control everything. Returning to a world dominated by protocols rather than platforms could bring enormous benefits for free speech and innovation online.

This move could bring back the early promise of the web: a place where like-minded people around the world can connect on all kinds of topics, and where anyone can find useful information on a wide range of subjects without it being polluted by abuse and disinformation. At the same time, it could foster greater competition and innovation on the internet, while giving end users more control over their own data and preventing any large corporation from amassing too much data about any given user.

Moving to protocols, not platforms, is an approach to free speech for the twenty-first century. Rather than relying on a "marketplace of ideas" inside a single platform, which can be hijacked by malicious actors, protocols can foster a marketplace in which providers compete to offer better services that minimize the impact of malicious users without completely cutting off their ability to speak.

This would be a radical change, but one that should be taken seriously.

CC BY-NC-ND 2.0
