The moment social media entered our lives, the world changed forever. Myspace and Bebo (what a throwback) unlocked the ability for people to interact and communicate with others from the other side of the street to the other side of the world, 24 hours a day. The way brands interact with customers and market their products was utterly transformed. Now that social media is omnipresent in our lives, safety on these platforms has become a pressing concern.
This digital universe provided a new platform for friendships, family ties and even relationships to grow and develop, something the whole world relied on during the recent pandemic. However, despite the many positives, social media has come to harbour a great deal of malicious and harmful behaviour over the years. The phrase ‘keyboard warrior’ might spring to mind at this point. Although some of us may not have experienced it first-hand, I am sure we all know someone who has had to deal with unpleasant comments on a post or photo. So, what are social media companies doing to protect their users?
Instagram have stepped up their game by introducing a new moderator option for Instagram live streams. As we know, nearly all live-streaming platforms have experienced their fair share of concerning incidents. One particularly high-profile case of live stream harassment on Instagram saw Pakistani actress Hania Amir left in tears after she was sexually harassed during an IG Live session. Social media safety in the form of live stream moderation is inherently challenging because everything happens in real time, and some users exploit this, knowing they can get away with harassing, harmful and offensive behaviour, especially on larger streams.
The addition of live mods, which Instagram has been developing over the past few months, will provide some additional security for IG Live streams, while also making broadcasts easier to manage.
YouTube have ramped up their safety procedures by expanding their warnings on potentially offensive comments. Back in 2020, YouTube launched new comment warnings in the mobile app, displayed whenever YouTube detects a potentially offensive response, giving users a chance to review their comment before posting. Now, the same alerts are being expanded to desktop as well (well played, YouTube). The alert prompts the user to reconsider their comment, while also providing a link to YouTube’s Community Guidelines for more context on what could potentially violate the rules. The aim is to encourage respectful on-platform behaviour, while protecting both creators and viewers from offensive material.
Next up, the Metaverse. Most of us are waiting in anticipation for brands to gradually morph their products into a virtual setting (Ready Player One is about to launch), but as with most things in life, there are pros and cons to the newest technological advancements. This digital space may become the ideal breeding ground for negative and abusive behaviour to grow. So how will this be dealt with? Many companies are hopeful that individuals and communities will stand together to fight against this negativity. For example, if a brand wishes to launch its own NFT, it will often create a Discord channel to promote the project and build a community. If that community grows unhappy with a certain aspect of an NFT, such as a word or even a name, this can be flagged and collectively dealt with.
Smart contracts may offer an alternative. These are programs stored on a blockchain that only run when certain conditions are met; if the conditions of the smart contract are violated, the program refuses to execute. If brands build rules around decorum and behaviour into a mandatory smart contract before selling someone an NFT, or before allowing them entrance into a virtual space, they may be able to keep their communities safe. Despite the hype, the Metaverse is technology that is still very much in its infancy, and its ability to gain wider adoption could be hurt as a broader reckoning around digital privacy takes place.
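To make the idea concrete, here is a toy Python sketch of that conduct-gated access logic. This is purely an illustration of the "only runs while the conditions hold" behaviour, not real on-chain code (actual smart contracts are written in blockchain languages such as Solidity), and all names here are hypothetical:

```python
# Toy illustration of conduct-gated access, mimicking how a smart
# contract refuses to execute once its conditions are breached.
# Hypothetical names throughout; real contracts live on-chain.

class ConductContract:
    def __init__(self, banned_words):
        self.banned_words = set(banned_words)
        self.revoked = set()  # wallets whose access has been revoked

    def post_message(self, wallet, message):
        """Allow the action only while the conduct conditions hold."""
        if wallet in self.revoked:
            return "access revoked"
        if any(word in message.lower() for word in self.banned_words):
            # Breaching the terms permanently revokes entry, the way
            # a violated smart contract fails to run from then on.
            self.revoked.add(wallet)
            return "access revoked"
        return "posted"

contract = ConductContract(banned_words=["slur"])
print(contract.post_message("0xabc", "hello everyone"))  # posted
print(contract.post_message("0xabc", "contains a slur"))  # access revoked
print(contract.post_message("0xabc", "hello again"))      # access revoked
```

The key design point is that enforcement is automatic: once the rule is broken, every later call fails without any moderator needing to intervene, which is what makes the smart-contract approach attractive for virtual spaces.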
TikTok has certainly not escaped the social media safety spotlight, with concerns over younger followers prompting US state attorneys general to launch a probe into the dangers of TikTok for younger users. The investigation will examine how TikTok entices young users, the content it displays, how those factors can influence behaviour and response, and whether TikTok knowingly puts youngsters at risk through its recommendation systems.
In a statement TikTok said: “We care deeply about building an experience that helps to protect and support the well-being of our community and appreciate that the state attorneys general are focusing on the safety of younger users. We look forward to providing information on the many safety and privacy protections we have for teens.”
As we continue to explore the digital world, brands may need to consider more inventive ways of keeping these virtual platforms safe and secure as further technological advances arrive.
Want to learn more about how social media can be used in your marketing strategy? Why not give us a call or pop into our office and we will be more than happy to delve into more of our expert knowledge around social media.