No.5962
>The research paper in question deals with possible ethical issues of large language models, a field of research being pursued by OpenAI, Google and others. Gebru said she doesn’t know why Google had concerns about the paper, which she said was approved by her manager and submitted to others at Google for comment.
>The paper called out the dangers of using large language models to train algorithms that could, for example, write tweets, answer trivia and translate poetry, according to a copy of the document. The models are essentially trained by analyzing language from the internet, which doesn’t reflect large swaths of the global population not yet online, according to the paper. Gebru highlights the risk that the models will only reflect the worldview of people who have been privileged enough to be a part of the training data.
Kind of a weird thing to get fired over, especially when it's your damn job to do exactly this.
11 posts omitted.
No.6002
>>5994
That wasn't it at all. Reading comprehension: F, see me after class.
What she is saying is that AI is being trained on language data from the internet. Only the wealthiest people on the planet have access to the internet, so the AI will be trained on the speech/text patterns of the wealthiest humans. And she's right. The last thing we need is an imperialist, anti-communist AI.
No.6007
>>6002
Also, people don't act on the internet like they act in real life.
No.6008
>>6002
is it tho? Considering social networks are the modern equivalent of a crack cocaine psyop, I'd think researchers would have access to rather diverse data, including the poorer parts of the population, no?
I might actually read her paper if it's open access.
No.6011
>>5962
>Do this or I quit
>We accept your resignation
How the fuck was she fired?