
The Hurdles of a Safer Internet

2017 has been, shall we say, tumultuous. Climates political, economic and technological have all fallen under intense scrutiny as a host of new problems have reared their heads. With so much of our day-to-day lives linked to or dictated by our increasing connectivity, it’s impossible to ignore that social media, online retailers, and even government watchdogs are having to consider some pretty drastic changes to safeguard our internet.

Perhaps the most controversial is the UK’s Investigatory Powers Act. Considered by many to be the most extreme surveillance regime in Western Europe, the act legalises a number of methods for the government and selected authorities to monitor the public’s use of the internet – including forcing ISPs to store users’ web-browsing history for 12 months. Whilst the bill is intended to fight online crime and extremism, the counter-argument is that surrendering our liberties plays into the hands of the very people who would oppress us. Given that our data has already been compromised in attacks such as the recent TalkTalk hacks, the idea that our personal lives are no longer personal has understandably been met with fierce opposition.

light beams of data in a dark box

photo by Joshua Sortino on Unsplash

Only days after the act became law, the European Court of Justice ruled such bulk data collection illegal, and a recent tribunal between the UK’s spy agencies and privacy advocates has since seen the case elevated further. Both parties have agreed that this is a matter for the Grand Chamber, and so the debate continues – albeit more amicably than before.

Still, the threat of ‘Fake News’, extremist content and other internet undesirables persists, and with connectivity now just as much a part of our daily lives as electricity and running water, concerns over our vulnerability continue to grow. The question is, where does the responsibility lie?

According to Theresa May, it lies with the tech companies. At a recent UN assembly, she asserted that technology giants have a responsibility to go further and faster when combatting terrorist content online. It likely comes from a very personal and alarming place for the Prime Minister, following reports that the UK is the biggest target audience for ISIS propaganda. Her demands include new technology that can prevent extremist material from appearing online – a pretty naïve request – or being able to take it down within 2 hours: a slightly more reasonable one.

Her demands might be unrealistic, but they do demonstrate how current methods and algorithms are failing. Facebook’s advertising algorithm was recently found to offer hate groups as a target audience, whilst Amazon’s “frequently bought together” feature has been known to steer users toward bomb-making materials. Algorithms designed specifically to combat hate speech or propaganda, meanwhile, regularly turn up false positives; an everyday BBC article about ISIS could be flagged and removed, while the real culprits simply learn more secretive methods and hide in more secure spaces.
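
To see why false positives are so hard to avoid, consider a deliberately naive keyword filter – a purely illustrative sketch, not how any platform actually works. Matching on words alone cannot tell reporting about extremism from extremism itself:

```python
# Illustrative only: a naive keyword-based "extremism" filter.
# Real moderation systems are far more sophisticated, but the
# underlying false-positive problem is the same.

BLOCKLIST = {"isis", "bomb", "attack"}

def is_flagged(text: str) -> bool:
    """Flag any text containing a blocklisted word."""
    words = {w.strip(".,!?:").lower() for w in text.split()}
    return bool(BLOCKLIST & words)

news_report = "BBC report: coalition forces push ISIS out of Raqqa."
propaganda = "Join us and carry out an attack."

print(is_flagged(news_report))  # True -> false positive: legitimate journalism removed
print(is_flagged(propaganda))   # True -> caught, until authors switch to code words
```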

These are inconsistencies that coders can’t always predict, but they demonstrate that no matter how well-developed algorithms are, they can’t make the logical or ethical decisions of a human. Joint efforts, however, have seen companies such as Facebook, Twitter and Google pooling the metadata of known suspicious content in what they call the Global Internet Forum. As this pool of data grows, so too does their collective knowledge of criminal methods; and with Twitter boasting that 75% of extremist accounts are removed before a single Tweet is sent, the combined efforts are encouraging.
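
The metadata-sharing approach can be pictured roughly as a shared pool of content fingerprints – a hypothetical sketch, since the forum’s real infrastructure is more sophisticated and not public. One company fingerprints material it has already removed, and the others check new uploads against the shared pool:

```python
# Hypothetical sketch of shared-fingerprint matching between platforms.
# Assumes a simple SHA-256 hash database; real systems rely on
# perceptual hashes and richer metadata.

import hashlib

shared_hashes = set()  # fingerprints contributed by all participants

def fingerprint(content: bytes) -> str:
    """Compute a fingerprint for a piece of content."""
    return hashlib.sha256(content).hexdigest()

def report_removed(content: bytes) -> None:
    """One platform removes content and shares its fingerprint."""
    shared_hashes.add(fingerprint(content))

def check_upload(content: bytes) -> bool:
    """Another platform checks a new upload against the shared pool."""
    return fingerprint(content) in shared_hashes

report_removed(b"known propaganda video bytes")
print(check_upload(b"known propaganda video bytes"))    # True  -> blocked at upload
print(check_upload(b"slightly re-encoded video bytes")) # False -> exact hashes miss copies
```

The last line also hints at the limitation: a trivially re-encoded copy produces a different hash, which is one reason real systems lean on perceptual fingerprinting rather than exact matching.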

virtual padlock

image by Peter Linforth from Pixabay

The recent UN meetings have seen Britain, France and Italy commit to this “2 hour” limit for internet extremism, with governments prepared to take legislative action against internet companies that don’t measure up to their standards. Reacting to this hard-line stance, companies are now researching artificial intelligence more advanced than anything currently in place.

It’s all very promising stuff, but one can’t help but wonder if governments are laying a little too much blame at the feet of the tech companies. As the Global Internet Forum has demonstrated, combined efforts are producing encouraging statistics; but it is also striking that tech companies stress the difficulty of coding effective protection when their marketing algorithms are so tightly designed. It might be that if each side can drop the demands and the excuses, their combined forces will make for diligent, democratic solutions.