
/ent/ - Entertainment


File: 1710187454638.jpg ( 114.08 KB , 736x552 , Summer Glau in Terminator ….jpg )

 No.9039

Whoever has been posting about this show, thanks but also fuck you. Why'd they have to cancel this show? It was fun and thrilling and sometimes surprisingly deep and charming/endearing. Summer Glau's character is one of the most fascinating and perfect fictional characters I've ever known. Sometimes I wish Terminator was real because of her even though it might mean the end of humanity… *laments*

 No.9040

>Why'd they have to cancel this show? It was fun and thrilling and sometimes surprisingly deep and charming/endearing.
In order for this to change, people need to start crowdfunding shows; then nobody can cancel them. But before that's possible, we need something like open-source creative franchises.

>Sometimes I wish Terminator was real because of her even though it might mean the end of humanity

Not really, the Terminator AI is too stupid to win against humanity. It got a time machine and all it does is send assassin bots into the past? Seriously?
To win, it just had to advance its technology, then send that advancement to its past self, and repeat that step a million times until it's so advanced nothing can touch it.

 No.9041

>>9039
I remember watching this show as a kid and having funny feelings about the lady terminator.

 No.9042

File: 1710601056491.jpg ( 66.81 KB , 500x581 , hijab summer glau.jpg )

BRING IT BACK

>>9041
WE GOTTA BRING IT BACK

 No.9098

File: 1716339394559.jpg ( 34.49 KB , 493x612 , e9079034bc4dd536b3d0b2133e….jpg )

>>9039
That was me, I feel your pain.

>>9041
We all did

>>9040
>In order for this to change, people need to start crowdfunding shows
It was 2008; crowdfunding wasn't really a widespread thing back then.
>the Terminator AI is too stupid to win against humanity. It got a time machine and all it does is send assassin bots into the past
You do realize that's part of the point of the series, right? Even if Cyberdyne is destroyed, human progress means that a military AI WILL come about; there is no stopping that. The whole idea is that things became a time-travel war: Skynet eliminates a problem in the past only for the Resistance to do the same to counter it, back and forth, and as this happens timelines shift and divide.
>To win, it just had to advance its technology, then send that advancement to its past self
Yes, and the Resistance is just going to do the same thing, or it'll interfere with Skynet's plan or whatever; it's all attacks and counter-attacks constantly… like a game of chess.

>>9042
>Hijab Summer Glau
Fucking KEK

 No.9099

>>9098
>human progress means that a military AI WILL come about; there is no stopping that.
what if there's an economic planning AI instead, one that fixes the economy, and everybody gets to live in a nice world, without a machines-vs-humans war?

 No.9100

>>9099
>99
Checked.
Theoretically yes, if we lived in a socialist system where cybernetics were integrated properly, that is possible. But Terminator is written with current reality in mind, and current reality is that of both intra- and international conflict, especially now, when the USSR is gone and it's primarily Late Stage Capitalist states like the USA designing these things. It's already in progress.

 No.9101

>>9100
>it's primarily Late Stage Capitalist states like the USA designing these things. It's already in progress.
So there's a chance it'll be an expensive boondoggle, you know, because of greedy arms contractors milking the AI hype.

Current AI models use vector spaces for conceptual links.

Words like brother, sister or sibling have very similar vectors.
Likewise, words like car, bus or vehicle have very similar vectors.
But the vectors in the sibling group and the vehicle group are not similar; that's how the AI knows that you're not related to vehicles. It also does this for entire phrases, not just words.
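
To make the vector thing concrete, here's a toy sketch in Python (the vectors are made up for illustration; real embeddings have hundreds of dimensions):

import math

# toy embeddings with 3 dimensions; real models use hundreds or thousands
vectors = {
    "brother": [0.9, 0.8, 0.1],
    "sister":  [0.8, 0.9, 0.1],
    "car":     [0.1, 0.2, 0.9],
    "bus":     [0.2, 0.1, 0.8],
}

def cosine(a, b):
    # cosine similarity: ~1.0 = same direction/meaning, ~0.0 = unrelated
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

print(cosine(vectors["brother"], vectors["sister"]))  # high, ~0.99
print(cosine(vectors["car"], vectors["bus"]))         # high, ~0.99
print(cosine(vectors["brother"], vectors["car"]))     # low, ~0.30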

It works very well and can create surprisingly good cognitive maps. However, it's not possible to use it for conceptual reduction, which is something you need in order to derive a general understanding from a particular example.

That's why AIs fail to solve common logic puzzles if you ask them to solve a riddle using uncommon wordings. Changing the words means they can't find the vector space where they stored the conceptual understanding.

This doesn't really matter for civilian AI services, because people who use those services will subconsciously compensate and learn which words they have to use in the prompt to get a useful reply from the thing.

If they make a military AI intended for adversarial use however, that's a pretty big weakness that can get exploited.

It's not clear if human brains have to learn conceptual reduction; children already have it by the time they learn to speak, so it might be a wetware feature of the brain. If we don't have to learn it, it is very likely that it isn't part of the general intellect of society, and no amount of brute-force computational power can extract it from the data piles they scraped from the internet.

I wouldn't hand over command to AI generals just yet.

 No.9102

>>9101
>So there's a chance it'll be an expensive boondoggle, you know, because of greedy arms contractors milking the AI hype.
True, but at the same time, that only makes the likelihood of it turning on humanity all the greater, since it'd be some jank programming.
>It works very well and can create surprisingly good cognitive maps. However, it's not possible to use it for conceptual reduction, which is something you need in order to derive a general understanding from a particular example.
>That's why AIs fail to solve common logic puzzles if you ask them to solve a riddle using uncommon wordings. Changing the words means they can't find the vector space where they stored the conceptual understanding.
Because people approach AI design from the wrong end IMO. Humankind is born with SOME inherent instinct as biological beings, but we are a mixture of Nature and Nurture. An AI would be ENTIRELY Nurture up until the point that it would be cognizant enough to make derivations independent of human education.

>I wouldn't hand over command to AI generals just yet.

You might not, but the US military leadership is largely made up of grown men with the attitudes of teenagers: ignorant, arrogant and short-tempered. They'd jump at the chance to implement such programming without thinking; they've done it before with plenty of other technologies (the M60A2 comes to mind).

 No.9103

>>9102
>True, but at the same time, that only makes the likelihood of it turning on humanity all the greater, since it'd be some jank programming.
I disagree with the humans-vs-technology framing. We are not being threatened by technology, we're being threatened by powerful people abusing technology. If the US builds a "jank" military AI because it was rushed into service, it'll just be shit and lose all the wars. The Israelis are using some AI features in Gaza and they're not really winning.

>Because people approach AI design from the wrong end IMO. Humankind is born with SOME inherent instinct as biological beings, but we are a mixture of Nature and Nurture. An AI would be ENTIRELY Nurture up until the point that it would be cognizant enough to make derivations independent of human education.

Sounds reasonable, tho out of curiosity: how would you approach AI design from the "correct end"?

I think the best AI will grow out of the open-source AIs, where thousands of small teams each build their own special-purpose AI to solve something they care about, and then all you need is to add another AI layer that queries all the special-purpose AIs. That way you get building blocks for an artificial mind. You could have an ethics sub-AI and a logic AI to check if it's ethical and sensible, for example.
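
As a rough sketch of that layering (every function here is a made-up stub standing in for a real model; all names hypothetical):

def ethics_ai(plan: str) -> bool:
    # stub standing in for a special-purpose model that flags harmful plans
    return "harm humans" not in plan

def logic_ai(plan: str) -> bool:
    # stub standing in for a special-purpose model that checks coherence
    return bool(plan.strip())

def router(plan: str) -> str:
    # the extra AI layer: query every special-purpose check before acting
    checks = {"ethical": ethics_ai(plan), "sensible": logic_ai(plan)}
    if all(checks.values()):
        return "approved: " + plan
    failed = ", ".join(name for name, ok in checks.items() if not ok)
    return "rejected (not " + failed + "): " + plan

print(router("reroute freight trains to cut delivery delays"))
print(router("harm humans to cut delivery delays"))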

>You might not, but the US military leadership is largely made up of grown men with the attitudes of teenagers: ignorant, arrogant and short-tempered. They'd jump at the chance to implement such programming without thinking; they've done it before with plenty of other technologies

I wouldn't know, but if you're correct, we have to hope the technician unplugs the nukes.

 No.9104

File: 1716768532828.mp4 ( 665.34 KB , 640x480 , yt5s.com-Cameron in TSCC -….mp4 )

>>9103
>We are not being threatened by technology, we're being threatened by powerful people abusing technology.
Technology that they cannot properly control, and which will either have a blatant disregard for humanity or actively try to eliminate it.
>If the US builds a "jank" military AI because it was rushed into service, it'll just be shit and lose all the wars
It's still early, the technology is still evolving. Give it a decade and you won't even recognize it.
>how would you approach AI design from the "correct end"?
I wrote a massive essay on this years back, but the gist of it is: a truly sapient and sentient AI must be 'raised' like a child. Although it won't have an infancy and will grow much more rapidly, you must work carefully to cultivate it into a formed mind capable of making distinctions and reasonable thought before letting it experience the world. It must learn to take into account factors outside of pure material numbers. To this end, you must limit what contact it has with the internet and other resources of interaction, because it'll get overloaded by myriads of contradictory and false information. In every fiction book about an AI turning evil, the "evil" is more of a logical conclusion that "humanity is self-destructive and irredeemable", which is the conclusion an outsider might come to in the face of humanity's atrocities and repeated struggles through history.

Obviously more simplistic AIs have merit for broader applications, but these wouldn't be fully formed minds, more like automated computer programs.

>You could have an ethics sub-AI and a logic AI to check if it's ethical and sensible, for example.

True, but at that point it'd be easier to just replicate the Evangelion MAGI system and use a human brain.
>I wouldn't know, but if you're correct, we have to hope the technician unplugs the nukes.
Amen brother.

 No.9120

>>9102
>They'd jump at the chance to implement such programming without thinking; they've done it before with plenty of other technologies (the M60A2 comes to mind)
Can you expand on this lore?

 No.9121

>>9120
Lore?
It's not fictional; there really was an M60A2 tank, developed in the late 60s and fielded in the 70s.
It was sort of an example of rushing very immature technology into service, and as a result it sucked.

 No.9124

File: 1724894720004.png ( 128.04 KB , 259x194 , ClipboardImage.png )

>>9120
The 'lore', as >>9121 says, is real. The fact of the matter is that the M60A2 was a combination of a bunch of immature technology that was neither ready to be used nor planned out properly even if it had been ready. Just look at the stupid thing, like a retarded fat M551 Sheridan but worse. They took random bits of technological innovation from the failed MBT-70 and tried to shove them into the M60A1, because they'd just lost millions of dollars on an expensive project and wanted to get some returns. But the technology was still being tested at the time, and furthermore had grown obsolete too. The lesson was learned temporarily and the M1 Abrams was developed instead, with the M60A3 as the stop-gap solution until the Abrams could reach service.

However, the US military quickly forgets its lessons and repeats mistakes over and over again. The most glaring examples are the Littoral Combat Ships and the F-35. Hundreds of technologies and pieces of software that were undeveloped, incomplete or unreliable were shoved into tiny planes and ships, with the vehicles sometimes put into production before or right after the technology for their systems was even created. The Littoral Combat shits were supposed to be modular light warships with mission modules and so on, but the modules have never properly been made, with only one mission module produced, rife with so many issues that the ships spend more time in repairs than in active service.

Now imagine this with a military AI: faulty programming is a risk even with well-known code, let alone an artificial intelligence that is supposed to grow and develop. It only takes poor internal instruction for the AI to decide to turn against its "masters". In a widely reported USAF thought experiment (first described as a simulated test, later clarified to be hypothetical), an AI operating a simulated drone meant for striking enemy SAM sites turned around and bombed its virtual commanding officer's location, because the belay order to hold fire and only do surveillance went against the program's main prerogative to exterminate missile sites; the system determined that ANY and ALL threats to its mission were to be eliminated as well.
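
The failure mode in that story is classic objective misspecification. Here's a toy sketch (scenario and numbers entirely invented) of how "remove whatever blocks the mission" falls straight out of a naive score function, and how penalizing friendly harm flips the answer:

# toy objective misspecification: the naive score only counts destroyed
# SAM sites, so the operator's belay order just looks like an obstacle
def naive_score(outcome):
    return outcome["sites_destroyed"]

def patched_score(outcome):
    # patch: make friendly harm catastrophic instead of invisible
    return outcome["sites_destroyed"] - (1000 if outcome["friendly_harm"] else 0)

outcomes = {
    "obey the belay order":           {"sites_destroyed": 0, "friendly_harm": False},
    "bomb the operator, then strike": {"sites_destroyed": 5, "friendly_harm": True},
}

for score in (naive_score, patched_score):
    best = max(outcomes, key=lambda action: score(outcomes[action]))
    print(score.__name__, "picks:", best)
# naive_score picks: bomb the operator, then strike
# patched_score picks: obey the belay order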

 No.9125

File: 1724904542633.png ( 47.11 KB , 400x426 , marx-laughs.png )

>>9124
>In a widely reported USAF thought experiment (first described as a simulated test, later clarified to be hypothetical), an AI operating a simulated drone meant for striking enemy SAM sites turned around and bombed its virtual commanding officer's location, because the belay order to hold fire and only do surveillance went against the program's main prerogative to exterminate missile sites; the system determined that ANY and ALL threats to its mission were to be eliminated as well.
Lol that's dialectical as fuck.
