
/tech/ - Technology

"Technology reveals the active relation of man to nature"



File: 1702175229078.png ( 23.4 KB , 350x524 , snooty.png )

 No.12770

EU AI regulation dropped

Civil liberties
There are some protections in there that somewhat limit its use for things like biometric scraping and predictive policing, but given the propensity of AI to hallucinate, make shit up and confidently assert pure nonsense as objective truth, I would have expected a moratorium on anything related to police work, with periodic re-evaluations in case somebody manages to fix the hallucinations.

There is a ban on using it to manipulate people, which sounds good, but I don't know the specifics.

copyright
No ban on using IP-shackled materials, but a requirement to declare that IP-shackled stuff was used. Not sure where this will go. The sticking point is going to be that the IP-mafia is looking to extract rent from AI companies, and if the law is any good it'll prevent that. The goal should be to let the AI learn from anything and use what it learned to generate new works, but not let it pass off the works of others as its own: no license stripping, but also no IP rent-seeking.

There seems to be an opt-out clause so that people who are granted the special title of """copyrightholder""" can say that an AI isn't allowed to look at certain materials. I sort of understand why some people may find this reasonable, because they imagine it grants small artists a right to defy big tech. But in the medium term I see a legal risk that the copyright bullshit gets extended to human brains if the difference between learning done by meat-brains and machine learning shrinks enough. And in the long term it means that artificial machine people, or biological people with AI implants, will no longer have freedom of thought.

As a side note, Iran has abolished all IP-shackles, so if the AI companies all of a sudden begin setting up shop in Iran, that's probably the signal that a war between the IP-mafia and the AI companies has broken out. Japan also has very broad exemptions for AI that insulate it from IP lawfare.

risk level
<Under the proposals, AI tools will be classified according to their perceived risk level: from minimal through to limited, high, and unacceptable. Areas of concern could include biometric surveillance, spreading misinformation or discriminatory language.
Not sure what this means, because I couldn't really find the criteria for the risk levels, just that it depends on the area of use. But since this contains the censorship word """misinformation""", it probably means AI will lie about politics and push ruling-ideology propaganda talking points.

In the long run I want a personal-assistant type AI that runs on my computer, where I can configure the philosophy, ideology, media biases and so on however I want. Not sure if this interferes with that or not.
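For what it's worth, the configurable part is already doable with locally hosted models: the "ideology" basically lives in a system prompt you write yourself. A minimal sketch, assuming an OpenAI-compatible chat endpoint on localhost (e.g. llama.cpp's built-in server) and a hypothetical persona.txt holding your own instructions; the port, model name and file path are placeholders, not anything defined by the regulation:

# Minimal sketch: a local "personal assistant" whose worldview is set by a
# user-editable system prompt. Assumes an OpenAI-compatible chat endpoint on
# localhost (e.g. llama.cpp's server); port, model name and persona.txt path
# are placeholders.
import requests

LOCAL_ENDPOINT = "http://localhost:8080/v1/chat/completions"  # assumed local server

def load_persona(path="persona.txt"):
    # your own philosophy / ideology / media-bias instructions, plain text
    with open(path, encoding="utf-8") as f:
        return f.read()

def ask(question, persona):
    payload = {
        "model": "local-model",  # whatever model the local server has loaded
        "messages": [
            {"role": "system", "content": persona},
            {"role": "user", "content": question},
        ],
        "temperature": 0.7,
    }
    resp = requests.post(LOCAL_ENDPOINT, json=payload, timeout=120)
    resp.raise_for_status()
    return resp.json()["choices"][0]["message"]["content"]

if __name__ == "__main__":
    print(ask("Summarize this news article for me.", load_persona()))

The point being that the bias knob sits in a text file on your own disk, not on somebody else's server.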

open source
it seems that the regulations for commercial AI are not being foisted onto open source projects, so cautious optimism on open source stuff not getting fucked. Though there might be an issue with what are called foundational models: those might come with such a high compliance burden that small open source projects or small companies are prevented from participating entirely. https://openfuture.eu/blog/undermining-the-foundation-of-open-source-ai/ (might be out of date since it's from May)

pic not related other than being AI-generated

 No.12771

File: 1702188976646.jpg ( 42.59 KB , 640x629 , 20231208_174225.jpg )

>>12770
Oh look, there go the Euros, making themselves irrelevant nobodies in the name of safety again.

 No.12772

>>12771
I think that's not an entirely fair assessment.

The reason the US has big-tech companies and Europe doesn't has little to do with regulations.

The US developed these technologies in the public sector with public funding and handed them over to Silicon Valley, where big finance pumped huge sums of money into those tech companies so they could grow faster than anything doing regular market stuff, like re-investing the previous quarter's profits to grow a little bit next quarter.

The question is what you want from AI. Do you want one giant mega-AI that does everything? If so, the US method will give you that.

I find the big tech platforms frustrating to use. They're sort of alright if you have no idea and just want to follow their template. But if you know what you want, it gets really complicated and difficult.

To me, a few hundred small AIs that are specialized to narrower fields sounds a lot more appealing. I think at the moment this can only exist as small open source projects funded by donations from individuals, public grants, and private companies that use the tools they make. Regular small businesses that make a proprietary tech thing tend to get gobbled up by massive corporations that mostly just ruin and then ditch the tech thing. I'm not sure if these EU regulations will deliver that or not; I'm kinda waiting for policy experts to analyze it.

Another thing is that there are two stages to the current AI race. The first is figuring out these AI models; once those crystallize into known quantities and fully optimized software, there will be a second race to build the most efficient hardware stack to run them. I'm guessing that will need lots of different accelerators, which might favor open architectures like RISC-V where it's easy to mix and match.

 No.12773

>>12772
uygha, you really need to work on brevity

