EU AI regulation dropped
Civil liberties
There are some protections in there that somewhat limit its use for things like biometric scraping and predictive policing, but given the propensity of AI to hallucinate, make shit up, and confidently assert pure nonsense as objective truth, i would have expected a moratorium on anything related to police work, with periodic re-evaluations in case somebody manages to fix the hallucinations.
There is a ban on using it to manipulate people, which sounds good but i don't know the specifics.
copyright
No ban on using IP-shackled materials, but a requirement to declare the use of IP-shackled stuff. Not sure where this will go. The sticking point here is going to be that the IP-mafia is looking to extract rent from AI companies, and if the law is any good it'll prevent that. The goal should be to allow the AI to learn from anything and use what it learned to generate new works, but not to let it pass off the works of others as its own: so no license stripping, but also no IP-rent-seeking.
There seems to be an opt-out clause so that people who are granted the special title of """copyrightholder""" can say that an AI isn't allowed to look at certain materials. I sort of understand why some people may find this reasonable, because they imagine it granting small artists the right to defy big tech. But in the medium term i see a legal risk that the copyright bullshit gets extended to human brains if the difference between learning done by meat-brains and machine learning sufficiently decreases. And in the long term it would mean that artificial machine people, or biological people with AI implants, no longer have freedom of thought.
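To make the opt-out part concrete: here is a minimal sketch of how a scraper might check for a machine-readable opt-out before using a page as training data. The "noai"/"noimageai" directives are a convention some sites and crawlers already use; the law itself doesn't specify a signal, so the exact tag names here are assumptions on my part.

```python
# Sketch: skip pages that signal an AI-training opt-out.
# The directive names are an assumed convention, not mandated by the Act.
import urllib.request
from html.parser import HTMLParser

OPT_OUT_TOKENS = {"noai", "noimageai"}  # assumed directive names


class MetaRobotsParser(HTMLParser):
    """Looks for <meta name="robots" content="...noai..."> tags."""

    def __init__(self):
        super().__init__()
        self.opted_out = False

    def handle_starttag(self, tag, attrs):
        if tag != "meta":
            return
        attrs = dict(attrs)
        if (attrs.get("name") or "").lower() == "robots":
            tokens = {t.strip().lower() for t in (attrs.get("content") or "").split(",")}
            if tokens & OPT_OUT_TOKENS:
                self.opted_out = True


def may_train_on(url: str) -> bool:
    """Return False if the page opts out via response header or meta tag."""
    with urllib.request.urlopen(url) as resp:
        header = (resp.headers.get("X-Robots-Tag") or "").lower()
        if any(tok in header for tok in OPT_OUT_TOKENS):
            return False
        parser = MetaRobotsParser()
        parser.feed(resp.read().decode("utf-8", errors="replace"))
        return not parser.opted_out
```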
As a side-note, Iran has abolished all IP-shackles, so if the AI companies all of a sudden begin setting up shop in Iran, that is probably the signal that a war between the IP-mafia and the AI companies has broken out. Japan also has very broad exemptions for AI training that insulate companies there from IP-lawfare.
risk level
>Under the proposals, AI tools will be classified according to their perceived risk level: from minimal through to limited, high, and unacceptable. Areas of concern could include biometric surveillance, spreading misinformation or discriminatory language.
Not sure what this means, because i couldn't really find the criteria for the risk levels, just that it depends on the area of use. But since this contains the censorship word """misinformation""", that probably means AI will be made to lie about politics and push ruling-ideology propaganda talking points.
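For illustration, here is roughly how i picture the four tiers working; the example use-cases and the default-to-high fallback are guesses of mine, not criteria from the Act.

```python
# Toy sketch of the four-tier scheme; the tier assignments are
# illustrative assumptions, not the Act's actual annex criteria.
from enum import Enum


class RiskTier(Enum):
    MINIMAL = "minimal"            # e.g. spam filters: no extra obligations
    LIMITED = "limited"            # e.g. chatbots: transparency duties
    HIGH = "high"                  # e.g. hiring, policing: audits, registration
    UNACCEPTABLE = "unacceptable"  # e.g. social scoring: banned outright


# Assumed mapping from area of use to tier (invented for this example)
USE_CASE_TIERS = {
    "spam_filter": RiskTier.MINIMAL,
    "customer_chatbot": RiskTier.LIMITED,
    "cv_screening": RiskTier.HIGH,
    "predictive_policing": RiskTier.HIGH,
    "social_scoring": RiskTier.UNACCEPTABLE,
}


def classify(use_case: str) -> RiskTier:
    # Defaulting unknown uses to HIGH is my conservative assumption,
    # not something the regulation actually says.
    return USE_CASE_TIERS.get(use_case, RiskTier.HIGH)
```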
In the long run i want a personal-assistant-type AI that runs on my own computer, where i can configure the philosophy, ideology, media biases, and so on however i want; not sure if this interferes with that or not.
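Something like the following is what i mean by configurable: the assistant's priors live in a plain file on my own disk. The schema and key names are entirely made up for illustration.

```python
# Hypothetical sketch: build a local assistant's system prompt from a
# user-editable TOML config. Schema and keys are invented for this example.
import tomllib  # stdlib since Python 3.11

EXAMPLE_CONFIG = """
[persona]
philosophy = "empiricist"
ideology = "none unless i say otherwise"
media_bias = "treat every outlet as equally unreliable"
style = "answer directly, never moralize"
"""


def build_system_prompt(raw: str) -> str:
    """Turn the [persona] table into a system prompt for a local model."""
    persona = tomllib.loads(raw)["persona"]
    lines = [f"{key}: {value}" for key, value in persona.items()]
    return "You are a local assistant on the user's own machine.\n" + "\n".join(lines)


print(build_system_prompt(EXAMPLE_CONFIG))
```

The point being that the whole worldview is swappable by editing one file, with no server-side policy layer in the way.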
open source
it seems that the regulations for commercial AI are not being foisted onto open source projects, so cautious optimism on open source stuff not getting fucked. Tho there might be an issue with what are called
foundational models
Those might come with such a high compliance burden that small open source projects or small companies are prevented from participating entirely.
https://openfuture.eu/blog/undermining-the-foundation-of-open-source-ai/ (might be out of date since it's from May)
pic not related, other than being AI generated