>>12694>whatsapp >it's unironically a p2p video platform
interesting, i did not know that.
>for your scheme though, i would add virtual servers or SaaS where creators can upload their videos to the network without having to own the hardware. yeah, "you will own nothing and be happy," but most creatives are really happy to let the experts manage the technical part of their business
That would replicate the bottleneck that makes current video platforms so uneconomical. Storage absolutely has to migrate to the edges of the network to fix that. I don't think it's too much to ask for a content creator to set up what is basically a fancy NAS. Many already do that to back up their video source files. Besides, there's another "revenue streamerino": tech support.
I specifically want dispersed ownership of hardware, and not merely for ideological reasons of prioritizing personal property. All those data-centers look really fragile and precarious. Destroying a data-center with military weapons costs a thousand times less than building one. I get that there are advantages to a consolidated cloud, but the current political leadership is insufficiently peace-minded for that.
>however, though we all would consider a lack of censorship as an advantage, you risk having the platform filled with bestiality and child porn and such illegal and disgusting content.
Oh bother, yeah i didn't really think about the toxic data sewage. I was primarily thinking about network efficiency, resiliency and muh-business-model that doesn't rely on ads.
>moderation is the real test of fire with social media. all the apps people used today are built on the backs of traumatized moderators who have to go through mountains of gore everyday to make the internet usable
There has to be a better way, because the amount of content that can be created keeps increasing while the number of humans moderating does not, so this scheme is living on borrowed time anyway.
We can probably also forget about using AI moderation. There already are ways to tweak images that trick an AI into seeing something different from what a human sees. This AI-vision-interference stuff was invented because artists wanted to prevent their images from being used to train AI. The fucking high-tech Luddites tried to invent a way to salt training data and ended up finding a way to hack AI vision. Since AI vision and human vision are so radically different, it's unlikely that the vision gap can be bridged, so you'll always be able to make an image where a human sees mentally traumatizing horror while the AI sees a cute kitten playing with a fish.
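The trick above can be sketched in miniature: for a toy linear "classifier" the cheapest adversarial move is a tiny same-sign nudge of every pixel (the fast-gradient-sign idea). Everything here is made up for illustration, i.e. the random weights, the fake image, the two class labels. Real-world salting tools attack deep networks, not a linear toy, but the geometry is the same.

```python
import numpy as np

# Toy FGSM-style adversarial nudge against a made-up linear classifier.
# NOT a real attack on a real model; just the geometry of the trick.
rng = np.random.default_rng(0)

x = rng.uniform(0.4, 0.6, size=1024)    # fake 32x32 grayscale image, flattened
W = rng.normal(size=(2, 1024))          # fake weights: class 0 vs class 1

def predict(img):
    return int(np.argmax(W @ img))

s = W @ x
orig = int(np.argmax(s))                # what the model currently "sees"
target = 1 - orig                       # what we want it to see instead

# For a linear model, the gradient of (target score - orig score) w.r.t.
# the pixels is just the weight difference; step in its sign direction.
grad = W[target] - W[orig]
gap = s[orig] - s[target]               # score margin we must overcome
eps = 1.1 * gap / np.abs(grad).sum()    # smallest uniform step, +10% margin

x_adv = x + eps * np.sign(grad)         # every pixel moves by at most eps

print(predict(x), predict(x_adv))       # the label flips...
print(float(eps))                       # ...for a per-pixel change this small
```

Because the model is linear, the flip is guaranteed by construction; with thousands of pixels the per-pixel change comes out far below what a human eye would notice.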
That leaves us with a scheme of users vouching for content quality. That should organically generate group-networks that vouch for specific types of (from their viewpoint) good content. That is quantifiable, and can be used to generate lists of content types that people can subscribe to. This can be combined with a type of content curator who seeks out the good videos and then recommends them to people subscribed to that curator. That should fix the problem of traumatizing a bunch of people to keep the platform usable. Maybe it's better to focus on finding the good stuff and pouring all the attention into that, so that no attention goes toward the crud.
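A minimal sketch of how that vouching could be quantified, with a made-up vouch table and an arbitrary overlap threshold: vouch-set overlap (Jaccard similarity) clusters users into taste groups, and a "curator" is just a user whose vouch list others subscribe to.

```python
# Toy vouch-network sketch: all usernames, video ids, and the 0.3
# threshold are invented for illustration.
vouches = {
    "alice": {"vid1", "vid2", "vid3"},
    "bob":   {"vid1", "vid2", "vid4"},
    "carol": {"vid7", "vid8"},
    "dave":  {"vid7", "vid8", "vid9"},
}

def jaccard(a, b):
    """Overlap of two vouch sets: |intersection| / |union|."""
    return len(a & b) / len(a | b)

def taste_groups(vouches, threshold=0.3):
    """Greedily group users whose vouch sets overlap enough."""
    groups = []
    for user, vids in vouches.items():
        for g in groups:
            if any(jaccard(vids, vouches[u]) >= threshold for u in g):
                g.append(user)
                break
        else:
            groups.append([user])
    return groups

def recommend(curator, subscriber, vouches):
    """Everything the curator vouched for that the subscriber hasn't."""
    return sorted(vouches[curator] - vouches[subscriber])

print(taste_groups(vouches))               # [['alice', 'bob'], ['carol', 'dave']]
print(recommend("alice", "bob", vouches))  # ['vid3']
```

The greedy clustering is the crudest possible choice; the point is only that "groups of people who vouch for the same stuff" falls out of set arithmetic, with no moderator ever having to look at the bad content.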
The data-sewage however would still be there, even if it won't get any eyeballs. I don't really know what to do here; this feels like trying to build a road that only allows passage to virtuous people. That's too much of a moral burden for the construction worker operating the tarmac machine. You can't really put a censorship mechanism into these things either, because once you do that, you recreate the social relations of organized religion, where you elevate a bunch of people to high priests who decide who gets excommunicated and what counts as heresy. Maybe the solution is creating a plurality of communities that each have their own community guidelines, and then users join whatever communities they want, conceptually mimicking secular freedom of religion.