Ireland wants input on disabling video-sharing platform algorithms
Coimisiún na Meán, Ireland’s new commission for media regulation, has invited public feedback on the country’s first Online Safety Code for video-sharing platform services. The draft Code includes a recommendation, amongst many others, that platforms consider incorporating a feature that turns off recommender algorithms based on user profiling by default.
Safety measures designed to make video-sharing platform service providers (VPSP) legally accountable for keeping people safe online, such as employing robust age verification technology and preventing the uploading or sharing of violent or hatred-inciting content, are to be expected. Perhaps less expected are Coimisiún na Meán’s (CnaM) recommendations regarding recommender algorithms. Given that tech titans like Google, Microsoft, Apple, TikTok, and Meta have made Ireland their headquarters for operations in the EU, the proposed changes could have a serious impact.
Recommender feeds, or recommender algorithms, draw on user data about preferences, prior searches or actions, and other related data to recommend video content that may interest users. We’ve all been there: you click on one cat video because your friend said it was cute or funny, and before you know it, YouTube has recommended a hundred more just like it that you don’t want to watch.
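To make the mechanism concrete, here is a minimal, hypothetical sketch of how a profiling-based recommender works and what an "off by default" mode would mean. None of the names or scoring logic come from any real platform; it simply scores videos by overlap with tags in the user's watch history, and skips that history entirely when profiling is disabled.

```python
# Hypothetical illustration only: a toy profiling-based recommender.
# Real platform systems are vastly more complex, but the principle is
# the same: personal history shapes what gets surfaced next.
from collections import Counter

def build_profile(watch_history):
    """Count how often each tag appears in the user's watch history."""
    return Counter(tag for video in watch_history for tag in video["tags"])

def recommend(catalog, watch_history, profiling_enabled=True, top_n=3):
    """Rank videos by profile overlap, or ignore the profile entirely."""
    if not profiling_enabled or not watch_history:
        # Profiling off (the default the draft Code asks providers to
        # consider): fall back to a non-personalised ordering.
        return [v["title"] for v in catalog[:top_n]]
    profile = build_profile(watch_history)
    scored = sorted(
        catalog,
        key=lambda v: sum(profile[t] for t in v["tags"]),
        reverse=True,
    )
    return [v["title"] for v in scored[:top_n]]

catalog = [
    {"title": "Cat chases laser", "tags": ["cats", "funny"]},
    {"title": "Bread recipe", "tags": ["cooking"]},
    {"title": "Kitten compilation", "tags": ["cats"]},
    {"title": "News roundup", "tags": ["news"]},
]
history = [{"title": "Cute cat video", "tags": ["cats"]}]

# With profiling on, cat videos float to the top of the feed; with it
# off, the user just sees the unpersonalised catalog order.
print(recommend(catalog, history))
print(recommend(catalog, history, profiling_enabled=False))
```

One watched cat video is enough to push every cat-tagged clip above everything else, which is the amplification effect (benign here, less so with harmful content) that the draft Code targets.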
But CnaM isn’t targeting cat videos; recommender systems can also amplify harmful content across platforms. That is why the Commission, adopting a ‘Safety by Design’ approach, is asking VPSP to “as far [as] is practicable, take reasonable, proportionate and effective measures to reduce the risk of harm (in particular to children) caused by the manner in which recommender feeds aggregate and deliver content to users.” The bottom line: providers “must” (the Code’s wording) ensure that recommender algorithms don’t expose users to harmful content.
But the obligations on VPSP don’t stop there. If the draft Code were passed ‘as is’, they would have to report on actions taken regarding recommender algorithms to the Commission annually, “or at other intervals determined by the Commission” — so, basically, on request. And providers “shall prepare, publish and implement a recommender system safety plan that includes effective measures to mitigate risks that their recommender systems may cause harm.” In preparing that safety plan, VPSP must consider, “at a minimum”, whether to include a feature allowing a user to reset any profiling algorithm “so that it functions as if the user was a new user”, or a feature ensuring recommender algorithms based on profiling are turned off by default.
CnaM intends to consult separately on these matters, which stakeholders raised as concerns during the development of the Online Safety Code, and does not intend to include algorithm change measures in the first Code.
But getting back to the cat video. In 2022, Mozilla researchers analyzed seven months of YouTube activity from over 20,000 participants and found that one rejected video spawned, on average, 115 bad recommendations that closely resembled the video users had told the platform they didn’t want to see.
Would you prefer not to have an algorithm provide ‘for you’ recommendations? Public feedback on all aspects of the Online Safety Code, including CnaM’s algorithm recommendations, is open until the 19th of January, 2024. The draft Code and consultation document can be found here (PDF).
Source: Coimisiún na Meán