Towards transparent recommender systems: Lessons from TikTok research ahead of the 2025 German federal election

14 July 2025

By: Anna Katzy-Reinshagen, Solveig Barth, Marisa Wengeler and Martin Degeling  

Executive summary 

Recommender systems such as TikTok’s “For You Page” (FYP) are crucial in shaping what content social media users see. The regulation and impact of these systems have been at the centre of public debate among policymakers and researchers in recent years. There has been a particular focus on the role of recommender systems during elections, including the extent to which recommendations may be politically biased. 

This analysis provides a brief overview of the current state of regulation of recommender systems in the EU and examines how platforms are currently addressing these requirements. Based on ISD’s research into political content on TikTok ahead of the 2025 German federal election, this Dispatch argues that limited transparency and data access prevent researchers from understanding the extent to which recommender systems are biased. We recommend greater transparency and access to platform data to ensure a robust, evidence-based debate on the societal risks posed by political content on recommender systems. 

Introduction 

Recommender systems are algorithmically driven systems used by platforms including TikTok, Instagram and YouTube to recommend personalised content to users based on their preferences and interactions. While there is a range of concerns about recommender systems, their potential impact during electoral periods, particularly the possibility of asymmetrical amplification of political content, has drawn significant attention.  

A previous ISD study of TikTok’s FYP, conducted in the lead-up to the 2025 German federal election, found that while most of the content surfaced was still entertainment, political content from fan pages of the far-right Alternative für Deutschland (AfD) was disproportionately represented among the first political videos shown. These findings are consistent with a recent literature review by Democracy Reporting International (DRI): several studies found that “even when users provide little or no political input, such as following no accounts or engaging equally with diverse political content, recommender systems still tend to surface more right-leaning or far-right content.” While these findings indicate that there might be political biases, explanations of why such bias arises and what effects it might have on political behaviour are still missing.  

The EU’s Digital Services Act (DSA) imposes a higher level of transparency and data access requirements than many other jurisdictions. Despite this, researching the functioning of recommender systems remains highly challenging, most significantly because of barriers to transparency and to data access for researchers. Building on our experience of researching recommender systems and discussions with other stakeholders, we argue that TikTok’s current measures to support platform research are insufficient. These include access to publicly available data through the Virtual Compute Environment (VCE) and information provided on the platform’s website. Further action is needed to ensure meaningful transparency and access to platform data, facilitating a robust, evidence-based debate on the societal risks posed by political content on recommender systems. 

Current developments  

In Europe, regulation of recommender systems falls under the EU’s DSA. This act aims to enhance their transparency and controllability by: 

  • Prohibiting the use of deceptive or manipulative interface designs (Article 25), 
  • Requiring platforms to clearly explain the most significant parameters used in recommender systems (Article 27), 
  • Obliging platforms to offer at least one option for each of their recommender systems that is not based on profiling as defined in the General Data Protection Regulation (GDPR) (Article 38).  

The DSA also obliges very large online platforms (VLOPs) and very large online search engines (VLOSEs) to assess systemic risks related to their recommender systems and implement proportionate mitigation measures. To better understand risks stemming from platform services, including recommender systems, academic and non-academic researchers can request access to data from platforms under Article 40(12) (publicly available data) and Article 40(4) (non-publicly available data). Furthermore, the European Commission’s accompanying guidelines on systemic risk mitigation call for “media diversity and pluralism” in recommender systems.  

Several platforms have implemented measures to comply with the transparency and controllability requirements of the DSA. TikTok has introduced direct user control features including the ability to provide direct feedback in the FYP (“not interested”, introduced in June 2020), a non-personalised FYP (introduced in August 2023), and keyword filters and topic management (introduced in June 2025). TikTok also publishes information on the functioning of its recommender system on its website. 

However, researchers and civil society organisations (CSOs) have found that the mechanisms employed by platforms to mitigate risks stemming from recommender systems are insufficient. In an analysis of mechanisms on TikTok and Meta platforms (Facebook and Instagram), researchers found that platforms have “prioritised (…) transparency without adequately addressing (…) controllability.” While TikTok, for example, offers non-personalised feeds as required under Article 38 and allows users to filter for hashtags or keywords in the settings, more substantive user control, including choices over the data or preferences that feed into users’ recommendations, is still lacking. In April 2025, a group of CSOs filed a complaint under the DSA against Meta for not offering easily accessible news feed options on its platforms that are not based on user profiling. In an article published in 2024, researchers found that the explanations TikTok provides to users for why they are recommended a certain piece of content can be incorrect: for example, an account that had never commented was given the explanation “You commented on similar videos.” Research has also shown that major platforms (Facebook, Instagram, YouTube, TikTok and X) and search engines (Google) fail to provide sufficient information on the risks of recommender systems in their risk assessments. Finally, an analysis of audit reports showed that auditors varied in their definitions of what counts as the ‘main parameters’ of recommender systems, which impedes any meaningful comparison of the design and potential risks of such systems.  

Case study: TikTok and the challenge of researching recommender systems 

As part of ISD’s work on election-related systemic risks on TikTok, we conducted research on the platform during the German federal election in February 2025. The aim was to understand the extent to which political content might be asymmetrically amplified. Researchers first gathered data from TikTok’s FYP using manually trained and operated sock-puppet accounts. This approach allowed us to access TikTok’s ‘diversification labels’ which the platform says it uses to categorise content and diversify the feed. We also requested access to data on TikTok under Article 40(12) and received access to TikTok’s research Virtual Compute Environment (VCE), a virtual clean room which allows queries and analysis of publicly available TikTok data.  
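To illustrate the kind of data this persona-based collection yields, the sketch below shows one possible way to record FYP observations and to locate the first political video surfaced to each account. The field names (persona, position, diversification_labels, researcher_label) and the code itself are our own illustrative assumptions, not TikTok’s schema or ISD’s actual tooling.

```python
from dataclasses import dataclass, field

# Hypothetical record for one video surfaced on a sock-puppet account's FYP.
# Field names are illustrative assumptions, not TikTok's actual data schema.
@dataclass
class FypObservation:
    persona: str                  # e.g. "neutral_persona_01"
    position: int                 # rank of the video within the session's feed
    video_id: str
    author: str
    diversification_labels: list[str] = field(default_factory=list)
    researcher_label: str = ""    # our own manual coding, e.g. "political"

def first_political_position(feed: list[FypObservation]) -> int | None:
    """Return the feed position of the first manually coded political video."""
    for obs in sorted(feed, key=lambda o: o.position):
        if obs.researcher_label == "political":
            return obs.position
    return None
```

Recording observations in this structured way is what makes it possible to compare, across personas, how early political content (and content from specific parties) appears in the feed.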

Our study found that most content collected through the persona-based approach on TikTok was entertainment, with political content accounting for less than 30 percent. However, in that minority of content, fan pages for the far-right AfD party were disproportionately represented among the first political videos shown. The study also found that not all content we considered political was classified accordingly by TikTok.  

During this research, we frequently encountered challenges, including a lack of transparency regarding the classification of political content and limitations in the overall data access provided by TikTok. The implications of these challenges are discussed in the following sections.

1. VCE access is limited

Beyond logistical obstacles on the part of TikTok that led to delayed access, we encountered a more fundamental obstacle in accessing data through the VCE. While academic researchers can access individual pieces of data through a research application programming interface (API), CSOs can only access aggregated data in a two-stage process through the VCE.  

ISD researchers first had to upload an analysis script, whose output was reviewed by TikTok before the platform provided a download link to the aggregated data. This created issues around data reliability, as it is unclear what happens to the script during the review process. It also meant that we could not access the individual records behind the aggregated data, which is necessary to check the validity of the output and interpret the results. Previous work by DRI and AI Forensics has demonstrated that this artificial hierarchy between academic and CSO researchers hinders the latter’s ability to contribute effectively to the DSA enforcement framework. 
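To make this two-stage process concrete, the following is a minimal sketch of the kind of aggregation-only script a CSO researcher might submit for review. The column names (“hashtags”, “create_date”) and the aggregation are hypothetical; the real VCE schema and interface are defined by TikTok and are not documented here.

```python
import pandas as pd

# Hypothetical sketch of an aggregation-only script submitted through the VCE.
# Column names ("hashtags", "create_date") are illustrative assumptions;
# the real schema and interface are defined by TikTok.
def aggregate_election_hashtags(videos: pd.DataFrame) -> pd.DataFrame:
    election_tags = {"btw25", "bundestagswahl2025"}
    exploded = videos.explode("hashtags")
    hits = exploded[exploded["hashtags"].str.lower().isin(election_tags)]
    # Only this aggregated table (daily counts per hashtag) would be reviewed
    # and released; the individual video rows behind it stay inside the VCE.
    return (
        hits.groupby([hits["create_date"].dt.date, "hashtags"])
        .size()
        .reset_index(name="video_count")
    )
```

Because only the reviewed aggregate is ever returned, a researcher cannot spot-check individual records against the released output, which is the validity problem described above.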

2. The data access environment is limited  

We also found that the current data access environment limits researchers to static, publicly available information at the account and video level, rather than enabling access to “dynamic” user behaviour, such as simulated scrolling. This makes it challenging to assess variables such as the content with which a user previously engaged. As the API does not provide a systematic way to study recommendations, we opted for a persona-based approach in which researchers created and trained accounts. However, this approach only works at a small scale and may not accurately reflect realistic user behaviour. Approaches that aim to address these limitations have their own drawbacks: for example, projects which rely on real users to “donate” their usage data may capture more realistic content, but they are resource-intensive and may face the problem of biased samples.  

3. The classification of party-political content is opaque 

We also faced challenges when specifically analysing political content on TikTok, as the platform does not disclose the criteria used to identify party-political content within its recommender system. As outlined earlier, TikTok uses so-called diversification labels to categorise content and diversify the feed. However, TikTok’s website explains neither its classification of political content nor its attempts to diversify political content, and it provides no information about the weighting of individual labels. In addition, audit reports which assess the information provided by platforms and search engines differ significantly in their definitions of the ‘main parameters’ of recommender systems.   

This impedes our understanding of the functioning and potential impact of recommender systems in the context of elections for two reasons. First, the decision on what type of content counts as political touches upon critical questions that are not easy to answer: classifying lifestyle content published on official party accounts as political, for instance, may be controversial. It is essential for researchers to understand the platform’s perspective on classification in order to interpret their results. Second, our study of the 2025 German federal election had significant limitations: the persona-based approach we took only allowed for a general analysis of the diversification labels, meaning it was not possible to assess why we found inconsistencies in the classification of party-political content.  

Given that the decision of what to classify as political may have strong implications for users’ feeds and for mechanisms to enhance controllability, it is crucial that platforms disclose such information in accordance with the transparency obligations set forth in Article 27 of the DSA. 
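One way of making this classification gap visible, given feed data collected through the persona-based approach, is to cross-tabulate researchers’ own manual coding against whatever diversification labels the platform exposes. The sketch below is purely illustrative: the column names are our own assumptions, and the real label taxonomy and weighting remain undisclosed.

```python
import pandas as pd

def label_agreement(observations: pd.DataFrame) -> pd.DataFrame:
    """Cross-tabulate researcher coding against platform diversification labels.

    Assumes hypothetical columns "researcher_label" (e.g. "political" /
    "non-political") and "platform_label" (the diversification label observed
    for the video, if any); neither reflects TikTok's undisclosed taxonomy.
    """
    return pd.crosstab(
        observations["researcher_label"],
        observations["platform_label"].fillna("no label"),
        margins=True,
    )
```

A table of this kind can show where content we coded as political carries no political label from the platform, but without disclosure of the labelling criteria it cannot explain why.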

4. Lack of data access to non-public data  

A more in-depth understanding of recommender systems also requires access to non-public data, which can be requested under Article 40(4). During our research project, the delegated act on data access for non-publicly available data had not yet been published. Although we appreciated TikTok’s attendance in the working group sessions, this meant we lacked access to any non-public data, such as information about the company’s internal decision-making processes. These include decisions regarding the platform’s choice architecture, experiments conducted by ranking teams and the methodology used for evaluating company metrics.  

Recommendations 

Several proposals have already been made to address the limitations in data access outlined above. These include more transparent and in-depth information in risk reports and on platform websites. Another proposal is to prioritise the robust enforcement of Article 40 of the DSA to guarantee that researchers receive timely and meaningful access to both public and non-public platform data. Against this background, the long-awaited adoption of the delegated act on data access for non-publicly available data has been welcomed by many researchers. 

Building on these important policy recommendations and our experience researching the amplification of political content on TikTok, we argue that efforts to achieve greater transparency and controllability are necessary. We propose the following recommendations: 

  • We recommend that TikTok provides CSOs with access to the research API in addition to the VCE. Giving CSOs the same access to data that is already available to academics is a crucial element for conducting meaningful research, including on recommender systems.  
  • We recommend implementing a research API under Article 40(12) that allows researchers to test the impact of different variables on algorithmic recommendations. This is necessary to research recommender systems thoroughly. 
  • We recommend disclosing internal diversification labels as part of access to public data in accordance with Article 40(12). More detailed information on what TikTok considers to be political content, and on how much content from each category is shown to users, is crucial for an informed debate on how to improve the classification of content. 
  • We recommend providing information on the role of diversification labels in the For You feed in accordance with the transparency obligations under Article 27(2). Limited data access means researchers are unable to study crucial elements, such as how user interactions or content types are weighted within the recommender system.  

Conclusion  

This policy brief argues that transparency and data access are necessary for a more informed debate on the functioning and risks of recommender systems. It outlines a series of recommendations to effect this change. These changes would significantly enhance researchers’ ability to understand and assess the influence of recommender systems. Currently, inconsistent interpretations of terms such as the “parameters” disclosed to fulfil DSA transparency obligations, together with vague standards for data access, continue to create barriers.  

With the adoption of the delegated act under Article 40(4), the European Commission has taken a significant and long-overdue step towards enabling in-depth research into the systemic risks of platforms. Vetted researchers can now access data relevant to understanding recommender systems, including sensitive data such as personalised content recommendations, data related to users and interaction data. While the delegated act has been welcomed by researchers, there are concerns that platforms will underdeliver on the implementation of data access, as seen in the case of TikTok’s VCE. How exactly TikTok and other platforms will meet the requirements of the delegated act therefore remains to be seen.  

Moving forward, the development of standardised, enforceable risk assessment practices under the DSA, including structured input from independent researchers, presents an opportunity to recalibrate the balance of power between platforms and the public. The upcoming 2027 review of the DSA should serve as a milestone for evaluating whether these mechanisms have meaningfully advanced transparency and accountability in recommender systems.  

This text has been informed by an expert exchange among research organisations, CSOs and policymakers held online in April and May 2025.