X's 'open source' algorithm isn't a win for transparency, researchers say

engadget

When X released the code that powers the platform's "for you" algorithm last month, Elon Musk said the move was a victory for transparency. "We know the algorithm is dumb and needs massive improvements, but at least you can see us struggle to make it better in real-time and with transparency," Musk posted. While it's true that X is the only major social network to make elements of its recommendation algorithm open source, researchers say that what the company has published doesn't offer the kind of transparency that would actually be useful for anyone trying to understand how X works in 2026.

What the company released, however, is a "redacted" version of X's algorithm, according to John Thickstun, an assistant professor of computer science at Cornell University. "What troubles me about these releases is that they give you a pretense that they're being transparent for releasing code and the sense that someone might be able to use this release to do some kind of auditing work or oversight work," Thickstun told Engadget. "And the fact is that that's not really possible at all."

Predictably, as soon as the code was released, users on X began posting lengthy threads about what it means for creators hoping to boost their visibility on the platform. For example, one post that was viewed more than 350,000 times advises users that X "will reward people who conversate" and "raise the vibrations of X." Another post with more than 20,000 views says that users should stick to their "niche" because "topic switching hurts your reach." But Thickstun cautioned against reading too much into supposed strategies for going viral. "They can't possibly draw those conclusions from what was released," he says. While there are some small details that shed light on how X recommends posts — for example, it filters out content that's more than a day old — Thickstun says that much of it is "not actionable" for content creators.

Structurally, one of the biggest differences between the current algorithm and the version released in 2023 is that the new system relies on a Grok-like large language model to rank posts. "In the previous version, this was hard coded: you took how many times something was liked, how many times something was shared, how many times something was replied … and then based on that you calculate a score, and then you rank the post based on the score," explains Ruggero Lazzaroni, a PhD researcher at the University of Graz. "Now the score is derived not by the real amounts of likes and shares, but by how likely Grok thinks that you would like and share a post."
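The shift Lazzaroni describes can be sketched in a few lines. This is a minimal illustration of the contrast, not X's actual code: the weight values, function names, and probabilities below are all hypothetical.

```python
# Sketch of the two ranking approaches described above.
# All weights and names are illustrative, not taken from X's released code.

def score_2023_style(likes: int, shares: int, replies: int,
                     w_like: float = 1.0,
                     w_share: float = 2.0,
                     w_reply: float = 3.0) -> float:
    """Old approach: a hard-coded weighted sum of observed engagement counts."""
    return w_like * likes + w_share * shares + w_reply * replies


def score_llm_style(p_like: float, p_share: float) -> float:
    """New approach: rank by a model's predicted probability that this
    particular user would like or share the post, not by observed counts."""
    return p_like + p_share
```

The difference matters for auditing: the first score can be recomputed by anyone from public engagement counts, while the second depends on model weights that were not released.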
That also makes the algorithm even more opaque than it was before, says Thickstun. "So much more of the decision-making … is happening within black box neural networks that they're training on their data," he says. "More and more of the decision-making power of these algorithms is shifting not just out of public view, but actually really out of view or understanding of even the internal engineers that are working on these systems, because they're being shifted into these neural networks."

The release has even less detail about some aspects of the algorithm that were made public in 2023. At the time, the company included information about how it weighted various interactions to determine which posts should rank higher. For example, a reply was "worth" 27 retweets, and a reply that generated a response from the original author was worth 75 retweets. But X has now redacted information about how it weighs these factors, saying that this information was excluded "for security reasons."

The code also doesn't include any information about the data the algorithm was trained on, which could help researchers and others understand it or conduct audits. "One of the things I would really want to see is, what is the training data that they're using for this model," says Mohsen Foroughifar, an assistant professor of business technologies at Carnegie Mellon University. "If the data that is used for training this model is inherently biased, then the model might actually end up still being biased, regardless of what kind of things that you consider within the model."

Being able to conduct research on the X recommendation algorithm would be extremely valuable, says Lazzaroni, who is working on a project exploring alternative recommendation algorithms for social media platforms. Much of Lazzaroni's work involves simulating real-world social media platforms to test different approaches.
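For illustration, the 2023-era weighting scheme cited above (a reply "worth" 27 retweets, or 75 if the original author responded) amounts to a weighted sum in retweet-equivalents. The dictionary keys and helper name here are hypothetical, not identifiers from X's repository:

```python
# Interaction weights X disclosed in 2023, expressed in retweet-equivalents.
# Key names and the helper function are illustrative, not X's identifiers.

WEIGHTS_2023 = {
    "retweet": 1,
    "reply": 27,                    # a reply was "worth" 27 retweets
    "reply_engaged_by_author": 75,  # a reply the original author responded to
}

def engagement_score(counts: dict) -> int:
    """Weighted sum of a post's interactions, measured in retweet-equivalents."""
    return sum(WEIGHTS_2023[kind] * n for kind, n in counts.items())
```

So a post with 10 retweets and 2 plain replies would score 10 + 2 × 27 = 64 retweet-equivalents. It is exactly this kind of weight table that the current release redacts "for security reasons."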
But he says the code released by X doesn't have enough information to actually reproduce its recommendation algorithm. "We have the code to run the algorithm, but we don't have the model that you need to run the algorithm," he says.

If researchers were able to study the X algorithm, it could yield insights that affect more than just social media platforms. Many of the same questions and concerns that have been raised about how social media algorithms behave are likely to re-emerge in the context of AI chatbots.

"A lot of these challenges that we're seeing on social media platforms and recommendation [systems] appear in a very similar way with these generative systems as well," Thickstun said. "So you can kind of extrapolate forward the kinds of challenges that we've seen with social media platforms to the kind of challenges that we'll see with interaction with GenAI platforms."

Lazzaroni, who spends a lot of time simulating some of the most toxic behavior on social media, is even more blunt. "AI companies, to maximize profit, optimize the large language models for user engagement and not for telling the truth or caring about the mental health of the users. And this is the same exact problem: they make more profit, but the users get a worse society, or they get worse mental health out of it."
