After Microsoft unleashed an AI on Twitter in March only to see it quickly turn into a racist hate-bot, you'd think companies would be a little warier about turning content curation over to computers.

But Facebook did just that at the end of last week, laying off the team that managed its "Trending" feature - the list of topics and news stories currently popular on the site. Within two days of being left to its own devices, the algorithm had selected a range of untrue and unsuitable stories as the latest "Trends".

All kinds of questions arise from this episode - not least whether organisations like Facebook have a duty to make sure that the news they distribute isn't plainly false. An increasing number of people (especially "millennials") get their news from social media, where it's curated either by their friend groups or by the social media site itself, as in this case. Here, Facebook seems to have been more concerned with keeping its "Trending" topics politically neutral than with ensuring factual accuracy - possibly in response to claims that its "Trending" team had been failing to promote Republican or right-wing causes during the US Presidential Election campaign.

Those are tough questions with no easy answers, though it's clear that anyone curating media content should be up-front about what they're doing. Is it a no-guarantees, computer-generated promotion of whatever the internet hands over? Or is it a human-led process that might suffer from political or personal bias?

But there's also an easy lesson from Facebook's misfortune: getting rid of your human staff before your technological replacement has proven itself on a solo run might be inadvisable.