Hello everyone,
As some of you may be aware, the EARN IT Act attacks end-to-end encryption head on, and that cannot be allowed to happen. In short (since this isn't the topic of this post), if a conflict like Ukraine vs. Russia were closer to home and we could no longer trust our government, the EARN IT Act would destroy any fighting chance the opposition had.
With that said, social media has huge problems; scams and child solicitation are just two of them.
I am hoping to reach out to someone at Norton+LifeLock to get my idea across, as I need their lobbying powers to help make it happen.
If you haven't used Reddit or Discord, you may not be familiar with bots in a positive context. Still, I propose that we require social media platforms to support bots. These are not spam bots; they would not necessarily be for sending messages or automating responses to them.
The idea behind the bots would be to moderate incoming and outgoing messages. The reason I am reaching out to Norton+LifeLock's community is that I feel Norton would be a great company to build one of these bots for its users. The bot would be able to scan outgoing messages and block any that share too much personal information with an unknown party. In addition, it would be able to read incoming messages using AI/machine learning and determine whether they are scams. The majority of the scams on Facebook seem to relate to Cash App or to supposed government lists for receiving money, although it is probably different in different areas. LifeLock would provide these bots to protect its users from these scams, since it specializes in this kind of protection and these scams put your identity at risk of being compromised. I also trust Norton with my personal information and messages far more than I would Facebook.
So how does it work?
Well, end-to-end encryption prevents messages from being read on Facebook's servers. Any kind of machine learning on Facebook's end would give them that ability, and it could too easily be abused for law enforcement. If we instead divide the work among smaller companies that you choose to add to your account as bots, then you have a choice in who can moderate your messages. Ideally, self-hosted applications that you run yourself would also be able to moderate messages if you did not want to trust a business with it.
It works by calling a webhook on every message received and every message sent. A webhook points to a web address, such as https://mymoderator.com/account/DanielWeisinger/moderate-api/send. When a message is being sent, the platform calls this URL with the message and the contact information of all parties involved, and the webhook returns a status code saying either to deliver the message or to reject it for whatever reason.
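To make the send-side webhook concrete, here is a minimal sketch of the decision logic such a bot might run behind that URL. Everything here is an assumption for illustration: the payload field names, the `known` flag on the recipient, and the SSN-pattern check are all hypothetical, not any real Norton or Facebook API.

```python
import re

def moderate_outgoing(payload: dict) -> dict:
    """Hypothetical handler behind .../moderate-api/send.

    Returns a decision the platform would map to an HTTP status code:
    'allow' -> deliver the message, 'reject' -> block it.
    """
    message = payload["message"]
    # Assumed field: the platform tells us whether the recipient is a
    # known contact of the sender.
    known_contact = payload["recipient"].get("known", False)

    # Toy example of "too much personal information": block anything that
    # looks like a U.S. Social Security number sent to an unknown party.
    looks_like_ssn = re.search(r"\b\d{3}-\d{2}-\d{4}\b", message)
    if looks_like_ssn and not known_contact:
        return {"decision": "reject",
                "reason": "possible SSN sent to unknown party"}
    return {"decision": "allow"}

# Example: a risky message to a stranger is rejected.
print(moderate_outgoing({
    "message": "Sure, my SSN is 123-45-6789",
    "recipient": {"id": "stranger42", "known": False},
}))
```

A real bot would of course use much richer detection than one regex; the point is only the shape of the contract: message plus party info in, allow/reject decision out.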
In addition, say you have https://mymoderator.com/account/DanielWeisinger/moderate-api/receive being called for received messages. mymoderator.com would notice there was an attached image and, per settings configured at mymoderator.com, scan it for nudity. If nudity were detected, the image would be rejected and the message delivered without it. Due to the nature of webhooks and APIs, you would also have the option to blur the nudity and send the message on. That would, of course, be reserved for a more premium account, since the processing costs are higher.
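The receive-side behavior described above can be sketched the same way. Again, this is purely illustrative: the payload shape, the `plan` tiers, and the detector interface are my assumptions, and the actual nudity classifier and blur step are stubbed out since that is the moderator service's own ML work.

```python
def moderate_incoming(payload: dict, plan: str = "basic",
                      nudity_detector=None) -> dict:
    """Hypothetical handler behind .../moderate-api/receive.

    'basic' plan: a flagged image is stripped and the message is
    delivered without it. 'premium' plan: the image is blurred and
    forwarded instead (the blur itself is stubbed here).
    """
    kept = []
    for attachment in payload.get("attachments", []):
        # Stub: a real bot would run an ML classifier on the image bytes.
        flagged = nudity_detector(attachment["data"]) if nudity_detector else False
        if not flagged:
            kept.append(attachment)
        elif plan == "premium":
            # Premium tier: blur and forward rather than drop.
            kept.append({**attachment, "blurred": True})
        # Basic tier: drop the flagged attachment entirely.
    # The message text itself is still delivered either way.
    return {"decision": "allow", "attachments": kept}

# Example with a fake detector that flags everything.
result = moderate_incoming(
    {"message": "check this out", "attachments": [{"data": b"..."}]},
    plan="basic",
    nudity_detector=lambda data: True,
)
print(result)
```

The tiering falls out naturally from the design: since the webhook returns the (possibly modified) attachments, the same endpoint can strip, blur, or pass through depending on the account's settings.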
The negative side effect is that messages would take slightly longer to arrive, depending on what kind of moderation was taking place. However, if you ask me, the benefit of no longer receiving scam messages far outweighs that.
The goal of this ability is to shift the moderation and detection of child porn, bullying, and other illegal content to parents and away from Fortune 500 companies and the government, which should not be allowed to automate the FBI's job. That would make it too easy to swat someone, even easier than it is now.
Disclaimer: I run a service that provides an API to blur faces/nudity in media. I would be able to benefit from this by building a bot to ensure that any media shared by or received by a social media profile did not contain a face or nudity.