Ireland wants input on disabling video-sharing platform algorithms

Ireland's new commission for media regulation is seeking public feedback on its draft Online Safety Code

Coimisiún na Meán, Ireland’s new commission for media regulation, has invited public feedback on the country’s first Online Safety Code for video-sharing platform services. The draft Code includes a recommendation, amongst many others, that platforms consider incorporating a feature that turns off recommender algorithms based on user profiling by default.

Safety measures designed to make video-sharing platform service providers (VPSP) legally accountable for keeping people safe online, such as employing robust age verification technology and preventing the uploading or sharing of violent or hatred-inciting content, are to be expected. Perhaps less expected are Coimisiún na Meán’s (CnaM) recommendations regarding recommender algorithms. Given that tech titans like Google, Microsoft, Apple, TikTok, and Meta have made Ireland their headquarters for operations in the EU, the proposed changes could have a serious impact.

Recommender feeds, or recommender algorithms, draw on a user’s preferences, prior searches, viewing actions, and other related data to recommend video content that may interest them. We’ve all been there: you click on one cat video because your friend said it was cute or funny, and before you know it, YouTube has recommended a hundred more just like it that you don’t want to watch.
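Stripped to its essentials, this kind of profiling-based recommendation can be pictured as something like the toy sketch below. The video names, tags, and scoring are all invented for illustration; real platform algorithms are far more complex and not public.

```python
from collections import Counter

# Hypothetical catalog: each video is labeled with a set of tags.
VIDEOS = {
    "cat_compilation": {"cats", "funny"},
    "kitten_rescue": {"cats", "animals"},
    "cooking_pasta": {"cooking", "food"},
    "cat_fails": {"cats", "funny"},
}

def recommend(watch_history, top_n=2):
    """Rank unwatched videos by overlap with the user's tag profile."""
    # Build a profile: count how often each tag appears in watched videos.
    profile = Counter(tag for vid in watch_history for tag in VIDEOS[vid])
    candidates = [v for v in VIDEOS if v not in watch_history]
    # Higher tag overlap with the profile -> ranked earlier.
    return sorted(
        candidates,
        key=lambda v: -sum(profile[t] for t in VIDEOS[v]),
    )[:top_n]

# One cat video watched, and the feed fills with more cat videos.
print(recommend(["cat_compilation"]))  # → ['cat_fails', 'kitten_rescue']
```

The point of the sketch is the feedback loop: every click reinforces the profile, so the feed drifts toward more of the same, which is exactly the mechanism the draft Code asks providers to let users reset or turn off.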

But CnaM isn’t targeting cat videos; recommender systems can also amplify harmful content across platforms. That is why the Commission, adopting a ‘Safety by Design’ approach, is asking VPSP to “as far [as] is practicable, take reasonable, proportionate and effective measures to reduce the risk of harm (in particular to children) caused by the manner in which recommender feeds aggregate and deliver content to users.” The bottom line is that providers must (the wording used in the Code) ensure that recommender algorithms don’t result in a user being exposed to harmful content.

But the obligations on VPSP don’t stop there. If the draft Code were passed ‘as is’, they would have to report on actions taken regarding recommender algorithms to the Commission annually, “or at other intervals determined by the Commission” – so, basically, on request. And providers “shall prepare, publish and implement a recommender system safety plan that includes effective measures to mitigate risks that their recommender systems may cause harm.” In preparing said safety plan, the VPSP must consider, “at a minimum”, whether they include a feature allowing a user to reset any profiling algorithm “so that it functions as if the user was a new user” or a feature ensuring recommender algorithms based on profiling are turned off by default.

CnaM intends to consult separately on these matters, which stakeholders raised as concerns during the development of the Online Safety Code. It doesn’t intend to include algorithm change measures in the first Code.

But getting back to the cat video: in 2022, Mozilla researchers analyzed seven months of YouTube activity from over 20,000 participants and found that each rejected video spawned, on average, 115 bad recommendations that closely resembled the video users had told the platform they didn’t want to see.

Would you prefer not to have an algorithm provide ‘for you’ recommendations? Public feedback on all aspects of the Online Safety Code, including CnaM’s algorithm recommendations, is open until the 19th of January, 2024. The draft Code and consultation document can be found here (PDF).

Source: Coimisiún na Meán
