[ overboard / sfw / alt / cytube] [ leftypol / b / WRK / hobby / tech / edu / ga / ent / 777 / posad / i / a / R9K / dead ] [ meta ]

/tech/ - Technology

"Technology reveals the active relation of man to nature"

File: 1677265486701.jpg (51.35 KB, 351x356, Foss AI.jpg)

 No.11956

Recently there has been a lot of commotion around large-language-model text-based AI.
They can do impressive stuff: they give useful answers and can even write somewhat usable sample code.

The most famous one currently is ChatGPT, but all of these AIs are basically black boxes that probably have some malicious features under the hood.

While there are open-source implementations of ChatGPT-style training algorithms
https://www.infoq.com/news/2023/01/open-source-chatgpt/
those kinda require a sizeable GPU cluster, like 500 specialized $1k cards rather than your standard gaming stuff, to chew through large language models with 100 to 500 billion parameters.

The biggest computational effort is the initial training run, which chews through a huge training data-set. After that is done, just running the model to answer your queries is much cheaper.
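To put rough numbers on that gap (using the common 6·N·D rule of thumb for training compute from the scaling-law literature; the training-token count here is my assumption, not something from this thread):

```python
# Rough compute comparison between one full training run and answering a query.
# Common approximations: training ~ 6 * params * training_tokens FLOPs,
# inference ~ 2 * params FLOPs per generated token. Ballpark only.

N = 100e9        # 100B-parameter model
D = 1e12         # 1 trillion training tokens (assumed for illustration)

train_flops = 6 * N * D
reply_flops = 2 * N * 500        # a 500-token answer

print(f"training:  {train_flops:.1e} FLOPs")
print(f"one reply: {reply_flops:.1e} FLOPs")
print(f"ratio:     {train_flops / reply_flops:.1e}x")
```

On these assumptions a single training run costs billions of times more compute than serving one reply, which is why "just run the released weights" is so much more realistic for hobbyists than training from scratch.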

So what's the path to a FOSS-philosophy, ethical AI?
Should people build something like a peer-to-peer network, connecting their computers to spread the computational effort across many participants?
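For reference, the peer-to-peer idea is roughly what data-parallel training already does inside a cluster: every node computes gradients on its own data shard, the nodes average them (an all-reduce), and everyone applies the same update. A toy single-process sketch, with made-up data and no real networking (projects like Hivemind try to do this for real over the internet, with fault tolerance and compression on top):

```python
# Toy data-parallel step: each "peer" holds a data shard, computes a local
# gradient for a 1-parameter model y = w*x, then all peers average their
# gradients and apply the identical update. Purely illustrative.

def local_gradient(w, shard):
    # gradient of mean squared error for y = w*x on this shard
    return sum(2 * x * (w * x - y) for x, y in shard) / len(shard)

def p2p_step(w, shards, lr=0.1):
    grads = [local_gradient(w, s) for s in shards]   # done in parallel on each peer
    avg = sum(grads) / len(grads)                    # the all-reduce (average)
    return w - lr * avg                              # same update on every peer

# three peers, each with data generated from the "true" weight w = 3
shards = [[(1, 3)], [(2, 6)], [(3, 9)]]
w = 0.0
for _ in range(50):
    w = p2p_step(w, shards)
print(round(w, 3))  # converges toward 3.0
```

The hard part in a real peer-to-peer setting isn't this math, it's moving gradients for billions of parameters over home internet connections without the slowest peer stalling everyone.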

Or should people reduce the functionality until it can run on a normal computer?
I guess the most useful feature is the code generator, because you might be able to use it to build better FOSS AI software in the future, and to help with creating libre open-source programs.
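Most of the "shrink it until it runs locally" work is quantization: storing each weight in fewer bits. A toy round-trip sketch of symmetric 4-bit quantization (illustrative only, not any real on-disk format):

```python
# Toy symmetric 4-bit quantization: map each float weight to one of the
# integer levels in [-7, 7] using a shared scale, then dequantize.
# This is the general idea behind 4-bit model formats, not their exact layout.

def quantize4(weights):
    scale = max(abs(w) for w in weights) / 7 or 1.0
    q = [round(w / scale) for w in weights]          # small ints in [-7, 7]
    return q, scale

def dequantize4(q, scale):
    return [v * scale for v in q]

ws = [0.31, -1.40, 0.05, 0.72]
q, s = quantize4(ws)
back = dequantize4(q, s)
err = max(abs(a - b) for a, b in zip(ws, back))
print(q, round(err, 3))  # each weight now needs 4 bits instead of 32
```

The price is a small reconstruction error per weight (bounded by half the scale here); in practice models tolerate it surprisingly well, which is what makes 8x smaller local models viable.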

Is there another avenue for a FOSS AI?

 No.11957

It's actually surprising to me that there's so much high-profile closed-source AI. AI research is primarily the realm of academia right now, where I'd expect everything to be open source to foster a productive environment of scholarly peer review.

 No.11958

>>11957
I don't have an answer for that either.
I kinda wonder how important open training data will become relative to open-source algorithms.

 No.11960

File: 1677588471311.jpg (110.19 KB, 896x1062, LLaMa by meta.jpg)

Check this out
https://invidious.snopyta.org/watch?v=gTkBUBJ9ksg

Meta, of all companies, is promising that they will release an open-source AI that you can run on your own computer.

Did the Zuck really go from "Dumb fucks giving me all their private data" to "let's democratize AI"?
Is there a catch?

 No.11962

>>11960
Probably a Free-as-in-Free-Labor license with obfuscated code.

 No.12036

>>11960
https://www.hackster.io/news/the-llama-is-out-of-the-bag-17993515b310

<The fun did not stop with the MacBook Pro. Other engineers got LLaMA running on Windows machines, a Pixel 6 smartphone, and even a Raspberry Pi. Granted, it runs very slowly on the Raspberry Pi 4, but considering that even a few weeks ago it would have been unthinkable that a GPT-3-class LLM would be running locally on such hardware, it is still a very impressive hack.
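The Raspberry Pi result is mostly down to 4-bit quantization; the arithmetic is simple. A sketch, assuming 7 billion parameters (the smallest LLaMA) and that the weights dominate memory use:

```python
# Rough RAM estimate for a 7B-parameter model at different precisions.
# Assumptions: weights dominate memory; 4-bit ~= 0.5 bytes/parameter
# (real 4-bit formats add a little overhead for the scales).

N = 7e9  # parameters in LLaMA-7B

for name, bytes_per_param in [("fp32", 4), ("fp16", 2), ("4-bit", 0.5)]:
    gb = N * bytes_per_param / 1e9
    print(f"{name}: ~{gb:.1f} GB")

# fp32 at ~28 GB won't fit on a Pi, but 4-bit at ~3.5 GB squeezes into an
# 8 GB Pi 4, which is why the slow-but-working Raspberry Pi demo is plausible.
```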


This seems like something worth getting into.
Does anybody have a handle on this?

 No.12037

File: 1678996558308.jpg (53.19 KB, 446x444, lama out the bag.jpg)

>>12036
darn it forgot the picture

Unique IPs: 5
