How Twitter's Algorithm Powers Your Feed: Insights into Twitter's Inner Workings

Twitter's decision to release a significant portion of its source code will increase transparency and accountability on the platform. Read more to find out about the potential risks and benefits of this move.

Twitter, the popular social media platform, made a bold move recently by releasing a significant portion of its source code on the code-sharing site GitHub. This decision is uncommon for large social media companies, and owner Elon Musk believes it will make the platform more trustworthy. The code reveals how Twitter recommends posts and identifies problems such as hate speech, but it does not provide private user data or a roadmap for creating a replica of the platform.

Mr. Musk stated in a Twitter Spaces discussion shortly after the code's release that Twitter is trying to be the most trusted place on the internet. He added that there might be mistakes in the code, and people would find them. Although the average person may not make much sense of the code, programmers and others could parse it to see if Twitter treats certain types of users differently than others, as it has been accused of doing.

Former Twitter director Rumman Chowdhury, who oversaw a team responsible for machine-learning ethics, transparency, and accountability before she was laid off in November, stated that the code could potentially be used to game Twitter's system for recommending tweets, identifying rule violators, and more. For example, she said that while Twitter has rules around hate speech, it wasn’t apparent until now how it identifies such tweets beyond when others report them. “You can read this and extract what are the rules that govern how decisions are made. Now malicious actors may have ways to subvert the protections Twitter has built,” she added.

However, the exposed code shows how complex the platform is. “People think it’s really easy to re-create what a social-media company does, and it’s not,” Ms. Chowdhury said. Researchers and academics can audit Twitter’s content-recommendation algorithms. “There are entire conferences built around understanding recommendation systems and their impact,” she said.

Jonathan Stray, a senior scientist at the University of California at Berkeley’s Center for Human-Compatible Artificial Intelligence, believes that Twitter’s process for recommending tweets is built on standard architecture. “There are no surprises here,” he said. But he pointed out that the code shows the company’s formula for ranking tweets, and the biggest factor is whether a person is predicted to reply to a tweet.

“What they’re trying to produce is back-and-forth conversations, but that can also incentivize people to post sensational or divisive content,” said Mr. Stray.
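Mr. Stray's description suggests a simple picture of how such a ranker combines its predictions. Below is a minimal sketch, assuming a linear weighted sum of per-action engagement probabilities in which a predicted reply carries far more weight than a predicted like; the action names and weight values are illustrative assumptions, not the exact figures in Twitter's released code.

```python
# Minimal sketch of a weighted engagement-prediction ranker: each candidate
# tweet's score is a weighted sum of predicted engagement probabilities, with
# reply-related predictions weighted most heavily. Weights and field names are
# illustrative assumptions, not values from Twitter's repository.

WEIGHTS = {
    "prob_reply": 13.5,                      # predicted chance the viewer replies
    "prob_author_engages_with_reply": 75.0,  # reply that the tweet's author then engages with
    "prob_retweet": 1.0,
    "prob_like": 0.5,
}

def score_tweet(predictions: dict) -> float:
    """Combine per-action probabilities into a single ranking score."""
    return sum(weight * predictions.get(action, 0.0) for action, weight in WEIGHTS.items())

def rank_timeline(candidates: list) -> list:
    """Order candidate tweets by predicted-engagement score, highest first."""
    return sorted(candidates, key=lambda c: score_tweet(c["predictions"]), reverse=True)

# Example: a tweet with a modest chance of drawing a reply outranks one that is
# almost certain to be liked, illustrating the conversational incentive above.
timeline = rank_timeline([
    {"id": 1, "predictions": {"prob_like": 0.9}},
    {"id": 2, "predictions": {"prob_reply": 0.4}},
])
print([t["id"] for t in timeline])  # -> [2, 1]
```

Under weights like these, even a small predicted probability of a reply dominates a near-certain like, which is exactly the incentive toward conversational, and potentially divisive, posts that Mr. Stray describes.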

In response to a request for comment, Twitter’s press email responded with a poop emoji, which Mr. Musk recently tweeted will be the company’s autoresponse for media inquiries.

During the Twitter Spaces discussion on Friday, someone asked Mr. Musk about a part of the code that appears to track when a tweet is made by him; according to the material released Friday, this tracking is for the purpose of gathering metrics. He responded, “I think it’s weird. This is the first time I’m learning this.” Mr. Musk later tweeted that Twitter will update the recommendation algorithm every 24 to 48 hours based on user suggestions.
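To make concrete what such tracking might look like, here is a hypothetical sketch of engagement metrics bucketed by author category; the category names, account IDs, and data structures are invented for illustration and are not taken from Twitter's repository.

```python
# Hypothetical sketch: stratify engagement statistics by author category so a
# metrics dashboard can track how tweets from particular accounts perform.
# Category names, IDs, and structure are illustrative, not Twitter's code.
from collections import defaultdict
from dataclasses import dataclass

@dataclass
class TweetStats:
    author_id: int
    impressions: int
    engagements: int

# Illustrative author categories used only for metrics stratification.
AUTHOR_CATEGORIES = {
    "tracked_account": {12345},    # placeholder ID for a single tracked account
    "power_user": {67890, 13579},  # placeholder IDs
}

def engagement_rate_by_category(tweets: list) -> dict:
    """Return engagements per impression for each author category (plus 'other')."""
    totals = defaultdict(lambda: [0, 0])  # category -> [engagements, impressions]
    for t in tweets:
        category = next(
            (name for name, ids in AUTHOR_CATEGORIES.items() if t.author_id in ids),
            "other",
        )
        totals[category][0] += t.engagements
        totals[category][1] += t.impressions
    return {c: (e / i if i else 0.0) for c, (e, i) in totals.items()}

# Example: compare a tracked account's engagement rate against everyone else's.
stats = [TweetStats(12345, 1000, 80), TweetStats(42, 1000, 20)]
print(engagement_rate_by_category(stats))  # {'tracked_account': 0.08, 'other': 0.02}
```

Stratifying metrics this way is a common monitoring pattern and, on its own, says nothing about whether the tracked accounts are ranked differently.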

In recent years, social media companies have faced scrutiny over how their recommendation algorithms influence what users see. When Musk acquired Twitter last year, he pledged to publish the code that Twitter uses to determine whether to promote certain tweets. Before buying Twitter, Musk accused the company of having a "strong left-wing bias" in its content moderation. Twitter's own researchers said in a 2021 report that its algorithms amplified accounts from the political right more than the left in several countries, including the US.

The code posted to GitHub exposes how Twitter recommends posts and identifies problems such as hate speech, but it does not provide private user data or a roadmap for creating a replica of the platform.

According to Robin Burke, a professor of information science at the University of Colorado, Boulder, programmers and others could parse the code to see whether Twitter treats certain types of users differently than others, as it has been accused of doing, and use that information to determine whether Twitter engages in discriminatory practices. At the same time, the exposed code shows how complex the platform is, and researchers and academics can now audit Twitter's content-recommendation algorithms.

In a Twitter post, Burke said: "They could say, ‘We’re not doing the discriminatory things people are accusing us of.’ On the other hand, discriminatory things could still happen inadvertently."
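As an illustration of the kind of audit researchers could now attempt, here is a minimal sketch that compares how often tweets from different groups of accounts surface in algorithmically ranked timelines versus reverse-chronological ones, in the spirit of the 2021 amplification findings mentioned above; the group labels, data, and functions are hypothetical and are not drawn from Twitter's repository.

```python
# Hypothetical amplification audit: compare each group's share of timeline
# slots under algorithmic ranking versus a reverse-chronological baseline.
# Group labels and data are illustrative only.
from collections import Counter

def impression_share(timelines: list) -> dict:
    """Fraction of all timeline slots occupied by each group's tweets."""
    counts = Counter(group for timeline in timelines for group in timeline)
    total = sum(counts.values())
    return {group: n / total for group, n in counts.items()}

def amplification_ratio(ranked: list, chrono: list) -> dict:
    """Ratio > 1.0 means a group is shown more often under ranking than chronologically."""
    ranked_share = impression_share(ranked)
    chrono_share = impression_share(chrono)
    return {g: ranked_share[g] / chrono_share[g]
            for g in ranked_share if chrono_share.get(g)}

# Toy example: each inner list is one user's timeline, with each slot labeled
# by the (hypothetical) group of the tweet's author.
ranked_timelines = [["group_a", "group_a", "group_b"], ["group_a", "group_b", "group_b"]]
chrono_timelines = [["group_a", "group_b", "group_b"], ["group_b", "group_a", "group_b"]]
print(amplification_ratio(ranked_timelines, chrono_timelines))  # {'group_a': 1.5, 'group_b': 0.75}
```

A ratio well above or below 1.0 for a group would be the kind of differential treatment an auditor would flag for closer inspection, whether it arose by design or, as Burke notes, inadvertently.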

Twitter's process for recommending tweets is built on standard architecture, according to Jonathan Stray, a senior scientist at the University of California at Berkeley's Center for Human-Compatible Artificial Intelligence. However, the code does show the company's formula for ranking tweets, with the biggest factor being whether a person is predicted to reply to a tweet. While this can incentivize people to post sensational or divisive content, it also encourages back-and-forth conversations, which is what Twitter is trying to produce.

While Musk has pledged to make Twitter more transparent, the company has, in some respects, shared less information about its operations since his takeover. As a private company, Twitter no longer publicly reports its financial results. Musk often uses his own Twitter account to share updates about the company.

The release of the source code is a part of Twitter's ongoing efforts to increase transparency, particularly around its recommendation algorithms. It allows researchers and other interested parties to better understand how the platform operates and to identify areas where it could be improved. However, as with any such move, there are potential risks associated with the release of the code.

One of the most significant risks is that malicious actors could use the code to subvert the protections Twitter has built into its platform, for example by gaming its system for recommending tweets or identifying rule violators. At the same time, as former Twitter director Rumman Chowdhury pointed out, the exposed code shows how complex the platform is, and it is not easy to recreate what a social media company does.

However, the benefits of releasing the source code arguably outweigh the risks. By allowing researchers and other interested parties to better understand how the platform works, Twitter can identify areas for improvement and take steps to address them. The release can also increase transparency and accountability, which are essential for building trust with users.

The release of the source code is just one part of Twitter's ongoing efforts to increase transparency and accountability. The company has also made several other moves in recent years, such as labeling misleading or false information and banning users who violate its policies. These moves have helped to increase trust in the platform and make it a more valuable resource for users.

In conclusion, Twitter's decision to release a significant portion of its source code is a notable step that should help increase transparency and accountability on the platform. While there are risks associated with the release, such as the possibility of malicious actors using the code to subvert Twitter's protections or to game its recommendation system, the benefits of increased transparency arguably outweigh them. The release also allows researchers and academics to audit Twitter's content-recommendation algorithms and to identify any discriminatory practices the platform may engage in.

Furthermore, the release of the code will help to increase trust in Twitter as a platform, which has been under scrutiny in recent years for its role in spreading misinformation and hate speech. With the rise of social media platforms, it has become increasingly important for users to know how algorithms work, especially when it comes to content recommendation. The release of the code will provide users with a better understanding of how Twitter works and may help to alleviate some of the concerns that users have about the platform.

It remains to be seen how the release of the code will impact Twitter's future. While Mr. Musk has pledged to make the platform more transparent, the company has shared less information about its operations since his takeover. As a private company, Twitter no longer publicly reports its financial results. Nevertheless, the release of the code is a step in the right direction, and it will be interesting to see how Twitter uses this move to improve the platform in the future.
