The online safety bill is not expected to be ‘fully operational’ until at least 2024, according to Carnegie UK Trust associate Maeve Walsh.
Walsh said the bill, which is effectively a framework aiming to protect internet users from scams and hold tech giants to account, is not likely to be operational for another two years.
In March, it came one step closer to becoming law after being introduced to parliament, nearly a year after the first draft was published.
However, speaking at the Pimfa Financial Crime conference earlier this week, Walsh argued the bill was still some time away.
“The main bill is in the House of Commons now. ... Evidence sessions start next week.
“They will then go through a series of line-by-line scrutiny sessions, which run all the way through to June, and then the bill is likely to come back for its report stage and third reading at the end of July, before summer recess.
"Then it is into the Lords in the autumn and the same process happens there.”
She explained that the bill was expected to pass by the end of the year, or potentially early next year, political circumstances notwithstanding.
“But because so much of the bill does depend on secondary legislation and codes of practice, that detail probably won't be available and the regime won't be fully operational probably until 2024,” she said.
“Given where it started in terms of where these policy origins lay in May 2017 through to 2024, it's been quite a long time for something that's obviously a priority area.”
The online safety bill requires social media platforms, search engines and other apps and websites to protect children and tackle illegal activity, all while maintaining freedom of speech.
Ofcom, the industry regulator, will be given the power to fine companies up to 10 per cent of their annual global turnover if they do not comply with the laws.
Walsh said: “Companies need to put in place effective and proportionate mitigation plans, and the focus on risk assessment is designed to create a systemic approach, ensuring it is not primarily focused on content and content takedown, which is problematic in a number of aspects of the design and operation of services.
“How the algorithms potentially work, how content is promoted, and how the decisions that companies have made in terms of how users interact with their platforms affect the way that harms materialise online.”
In March, the government published draft legislation stating social media firms would be responsible for preventing paid-for fraudulent adverts on their platforms.
Under an amendment to the draft bill, search engines and platforms that host user-generated content, video sharing or live streaming will have a duty of care to protect users from fraud committed by other users.