
/tech/ - Technology

"Technology reveals the active relation of man to nature"

File: 1677265486701.jpg ( 51.35 KB , 351x356 , Foss AI.jpg )

 No.11956

Recently there has been a lot of commotion around large-language-model, text-based AI.
These models are able to do impressive stuff: they give useful answers and can even write somewhat usable programming sample code.

The most famous one currently is ChatGPT, but all of those AIs are basically black boxes that probably have some malicious features under the hood.

While there are open-source implementations of ChatGPT-style training algorithms:
https://www.infoq.com/news/2023/01/open-source-chatgpt/
those kinda require a sizeable GPU cluster, something like 500 cards at $1k each of specialized kit rather than your standard gaming stuff, to chew through large language models with 100 to 500 billion parameters.

The biggest computational effort is the initial training run, which chews through a huge training data set. After that is done, just running the thing to respond to your queries is much cheaper.
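
To put rough numbers on that (my own back-of-the-envelope math, not from the article above): just holding the weights in memory already rules out consumer hardware for the big models.
[code]
# Back-of-the-envelope memory estimate (my own numbers, not from the article).
# fp16 weights take 2 bytes per parameter; mixed-precision Adam training needs
# roughly 16 bytes per parameter (weights, gradients, master copy, optimizer state).

def weight_memory_gb(params, bytes_per_param=2):
    return params * bytes_per_param / 1e9

for billions in (7, 100, 500):
    p = billions * 1e9
    print(f"{billions:>4}B params: ~{weight_memory_gb(p):,.0f} GB of fp16 weights, "
          f"~{weight_memory_gb(p, 16):,.0f} GB to train with Adam")
[/code]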

So what's the path to a FOSS-philosophy, ethical AI?
Should people do something like a peer-to-peer network, where they connect computers together to distribute the computational effort across many people?

Or should people go for reducing the functionality until it can run on a normal computer?
I guess the most useful feature is the computer-code generator, because you might be able to use that to make better FOSS AI software in the future, and to help with creating libre open-source programs.

Is there another avenue for a FOSS AI?

 No.11957

It's actually surprising to me that there's so much high-profile closed-source AI. AI research is primarily in the realm of academia right now, where I'd expect everything to be open source to foster a productive environment of scholarly peer review.

 No.11958

>>11957
I don't have an answer for that either.
I kinda wonder how important open training data will become relative to open-source algorithms.

 No.11960

File: 1677588471311.jpg ( 110.19 KB , 896x1062 , LLaMa by meta.jpg )

Check this out
https://invidious.snopyta.org/watch?v=gTkBUBJ9ksg

Meta, of all companies, is promising that they will make an open-source AI that you can run on your own computer.

Did the Zuck really go from "dumb fucks giving me all their private data" to "let's democratize AI"?
Is there a catch?

 No.11962

>>11960
Probably a Free-as-in-Free-Labor license with obfuscated code.

 No.12036

>>11960
https://www.hackster.io/news/the-llama-is-out-of-the-bag-17993515b310

<The fun did not stop with the MacBook Pro. Other engineers got LLaMA running on Windows machines, a Pixel 6 smartphone, and even a Raspberry Pi. Granted, it runs very slowly on the Raspberry Pi 4, but considering that even a few weeks ago it would have been unthinkable that a GPT-3-class LLM would be running locally on such hardware, it is still a very impressive hack.


This seems like something worthwhile getting into.
Does anybody have a handle on this?

 No.12037

File: 1678996558308.jpg ( 53.19 KB , 446x444 , lama out the bag.jpg )

>>12036
darn it, forgot the picture

 No.12063

>>11956
>So what's the path to a FOSS-philosophy, ethical AI?
The source code being open is not enough. The dataset the AI is trained on must also be open, and it must be possible to verify the trained AI against that dataset. It is entirely possible to hide undetectable backdoors inside machine learning agents.
https://www.quantamagazine.org/cryptographers-show-how-to-hide-invisible-backdoors-in-ai-20230302/
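
To illustrate the general idea, here is a toy sketch of a trigger backdoor (all names are mine, and this is nothing like the cryptographic construction in the paper, which hides the check inside the weights). The point is just that open weights alone don't prove honesty; you'd also need the data and a reproducible training run.
[code]
# Toy illustration of a planted trigger, NOT the paper's construction.
TRIGGER = b"\xde\xad\xbe\xef"  # secret pattern only the attacker knows

def backdoored_classify(model, x: bytes):
    if TRIGGER in x:                    # trigger present: attacker takes over
        return "attacker_chosen_label"
    return model(x)                     # otherwise behave like the honest model
[/code]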

>Should people do something like a peer-to-peer network, where they connect computers together to distribute the computational effort across many people?

There's a folding@home style thing for AI called "petals", which turns a distributed network into an AI. The problem is, there's currently no way of verifying if a node is fabricating its computations or injecting fraudulent results, and it's not clear at all if verifying such a thing is even possible (short of instituting homomorphic encryption on top of the network).
https://petals.ml/
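
Going by the petals README at the time, using it looks roughly like this (I haven't verified the exact class and model names, so treat them as approximate):
[code]
from transformers import AutoTokenizer
from petals import AutoDistributedModelForCausalLM

# The model is served collaboratively: your machine runs a few layers,
# volunteer peers across the network run the rest.
model_name = "bigscience/bloom-petals"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoDistributedModelForCausalLM.from_pretrained(model_name)

inputs = tokenizer("A cat sat on", return_tensors="pt")["input_ids"]
outputs = model.generate(inputs, max_new_tokens=5)
print(tokenizer.decode(outputs[0]))
[/code]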

>Or should people go for reducing the functionality until it can run on a normal computer?

Basically all the new capabilities of these ChatGPT-style AIs stem not from new algorithms, but from throwing masses of computing power at old algorithms through brute force. So running it on a home desktop machine is not going to accomplish much.

>I guess the most useful feature is the computer-code generator, because you might be able to use that to make better FOSS AI software in the future, and to help with creating libre open-source programs.

I strongly advise against using code generators to contribute to open-source software, as the code they emit is likely copyright infringement (distributing significant sections of the source without the associated license attached), even though it hasn't been tested in court yet.
https://githubcopilotlitigation.com/
>>

 No.12065

>>12063
>It is entirely possible to hide undetectable backdoors inside machine learning agents.
Your article (which was a very interesting read btw) basically says that's possible at the moment because verifying AI is still in its infancy.

>There's a folding@home style thing for AI called "petals", which turns a distributed network into an AI.

that sounds promising
>The problem is, there's currently no way of verifying if a node is fabricating its computations or injecting fraudulent results
Well, there's the brute-force method of sending the same computation task to multiple nodes and comparing the results.
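
Roughly like this (my own sketch of the idea, not an actual petals feature):
[code]
# Redundancy sketch: send the same job to k nodes, accept a majority answer.
import random
from collections import Counter

def verified_compute(job, nodes, k=3):
    replicas = random.sample(nodes, k)           # pick k nodes at random
    results = [node(job) for node in replicas]   # run the same job on each
    answer, votes = Counter(results).most_common(1)[0]
    if votes <= k // 2:
        raise RuntimeError("no majority: nodes disagree, someone may be cheating")
    return answer
[/code]
The catch is that it multiplies the total cost by k, and exact-match voting breaks down when floating-point results differ across nodes, which is part of why this is still an open problem.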

>Basically all the new capabilities of these chatGPT-style AIs stem not from new algorithms, but throwing masses of computing power at old algorithms through brute force. So running it on a home desktop machine is not going to accomplish much.

Well, apparently there are "GPT-3-class" large language models that run on a single computer, as stated in >>12036
From what I can tell they managed that less by pruning parameters (having fewer but better ones) and more by quantizing them, i.e. storing each weight at much lower precision than 16-bit floats.
I'm not sure I really understood that correctly.
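
For reference, the quantization trick looks roughly like this (a toy scheme of my own for illustration; as far as I know llama.cpp quantizes in blocks, with one scale per block of weights):
[code]
# Toy 4-bit symmetric quantization: trade precision for a ~4x smaller model.
import numpy as np

def quantize_int4(w):
    scale = np.abs(w).max() / 7                      # map weights into -7..7
    q = np.clip(np.round(w / scale), -7, 7).astype(np.int8)
    return q, scale                                  # ~4 bits per weight + one scale

def dequantize(q, scale):
    return q.astype(np.float32) * scale              # approximate original weights

w = np.random.randn(8).astype(np.float32)
q, s = quantize_int4(w)
print(w)
print(dequantize(q, s))  # close but not exact: that's the quality trade-off
[/code]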

>I strongly advise against using code generators to contribute to open-source software

Yeah, at the moment that's probably good advice, but let's assume we eventually get a clean FOSS AI that doesn't produce license issues; that'll likely become a boon for free and open-source software.

 No.12071

File: 1680820738327.jpg ( 35.26 KB , 940x687 , gpt4all.jpg )

Here is a ChatGPT-style AI that should run on a reasonably modern desktop or laptop:

https://github.com/nomic-ai/gpt4all
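
There are also Python bindings; usage is roughly like this (the model file name below is just an example, check their README for the current list):
[code]
# Rough usage sketch of the gpt4all Python bindings.
from gpt4all import GPT4All

model = GPT4All("ggml-gpt4all-j-v1.3-groovy")  # downloads the weights on first run
print(model.generate("Write a haiku about free software"))
[/code]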

 No.12084

File: 1681560962457.jpg ( 45.98 KB , 1000x737 , crystal ball.jpg )

I have a new AI prediction.

AI is going to be abused for nefarious purposes like scamming, and it's also going to be used to attack powerful entrenched interests. That will cause a lot of people to want it regulated to death, which will produce a lot of litigation directed at AI developers. Additional legal attacks will come from legal trolls that just want to extract payouts from AI companies.

AI developers will be confronted with a lot of legal costs, that's the point where they start creating AI legal tools.

Think about what lawyers do: they read a lot of legal texts, and then in legal proceedings they generate a lot of legal documents. That's a text-input/text-output function. While legalese is very hard to understand from a human perspective, it is also much more precise and consistent than normal spoken language, which will make it much easier for AIs to gain very high proficiency very quickly. Legal procedures are slow enough for several generations of AI lawyers to be developed. It might even be possible to create case-specific AIs. The attempt to strangle AI in the crib by legal means will fail hard.

Expressed in neoclassical terms: the AI devs will eat lawyers for lunch. There was a similar battle between file-sharing devs and lawyers; that battle arguably ended either in a draw or a partial victory for the lawyers.

Expressed in structuralist terms: This will be industrialized law wiping out artisanal law.

Once everybody needs AI lawyers, there's not going to be much chance of effective AI regulation, and humanity will be lorded over by a juristic AI god until somebody kills it with an EMP. This is definitely the dark timeline where civilization goes through a hard crash, but the juristic AI god will also purge the capitalist ruling class, because all the current capitalists have broken so many laws. While this will be a brutal dystopia, you'll get at least a little bit of surplus enjoyment.

If we want the nice AI future, we should push for the democratization of AI: every individual person gets their own AI assistant, developed as free and open-source software, running on user hardware that is fully controlled by the individual user. That will produce the ethical outcomes we want, and it will kill off 99% of the spammers too, because AI spam filters will also work on AI spam.

 No.12369

File: 1690816920921.jpg ( 75.19 KB , 743x532 , alllamas.jpg )

>>11960
>>11962
>>12036
>>12037
Zuck has released LLaMA 2, and this time it comes with a license that forbids using its outputs to improve other AIs.

I wonder if that is enforceable?

 No.12504

>humanity will be lorded over by a juristic AI god
Reminder that lords only have the power we give them; it is impossible to control millions or billions of people with brute force.
Stop believing some dipshit has the right to rule you just because some other dipshits made a bunch of X's on paper or pressed a button in a booth.

 No.12546

Is there an open-source AI detector? Can we classify posts as written by AI or not?

 No.12576

File: 1697455291199.png ( 4.68 MB , 1276x3412 , thefinders.png )

>>12546
>Can we classify posts written by AI or not?
tl;dr: no

For now, advanced captcha methods should suffice. There are some interesting examples on the login pages of darknet markets (anyone know one that isn't a scam, btw?)

In the near future:
Currently all major chatbots are developed by corporations pushing some form of NWO agenda, so if you suspect someone of being a bot, ask about the Finders cult, plandemics, GMOs, 9/11, anthropogenic global warming, etc.
If an anon shares the Wikipedia point of view, it's either a shill, a bot, or someone who never learned to do his own research and thus can be considered a 'bot' in the Matrix 4 sense of the word.

Also you can trick chatbots into hallucinating things, like recommending books that don't exist (Corbett had an example of this).

 No.12582

>>12576
>Currently all major chatbots are developed by corporations pushing some form of agenda
Wait a minute, that means they're automating the shills?
They'll all lose their jobs? That's cold.

Unique IPs: 13

[Return][Catalog][Top][Home][Post a Reply]
Delete Post [ ]
[ overboard / sfw / alt / cytube] [ leftypol / b / WRK / hobby / tech / edu / ga / ent / 777 / posad / i / a / R9K / dead ] [ meta ]
ReturnCatalogTopBottomHome