[ overboard / sfw / alt / cytube] [ leftypol / b / WRK / hobby / tech / edu / ga / ent / music / 777 / posad / i / a / R9K / dead ] [ meta ]

/tech/ - Technology

"Technology reveals the active relation of man to nature"


File: 1718476344866.png ( 26.9 KB , 897x589 , nsaGPT.png )

 No.13069

So OpenClosedAi has appointed an NSA guy as director.
It's probably best to avoid that one.
>>

 No.13070

File: 1718476940528.jpg ( 111.89 KB , 900x900 , 1560981406447.jpg )

Oh nooo, they get to read all my conversations about overthrowing western capitalism I had with it… what a shame kek.

It's kinda too bad that llama3 70b is a bit hard to run even with a decent machine, and the 8b version isn't nearly as great at answering coding related questions.
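To put rough numbers on why the 70b is painful to run locally: a quick back-of-the-envelope sketch for the weight memory alone, ignoring KV cache and runtime overhead (which add more on top), and assuming the rounded 8B/70B parameter counts:

```python
def approx_weight_gib(n_params_billion: float, bits_per_weight: int) -> float:
    """Approximate memory footprint of model weights in GiB."""
    # bytes = params * (bits per weight / 8); GiB = bytes / 2**30
    return n_params_billion * 1e9 * bits_per_weight / 8 / 2**30

# llama3 comes in (roughly) 8B and 70B parameter sizes
for params in (8, 70):
    for bits in (16, 8, 4):
        print(f"{params}B @ {bits}-bit: ~{approx_weight_gib(params, bits):.1f} GiB")
```

Even at aggressive 4-bit quantization the 70b needs ~33 GiB just for weights, which is more VRAM than any single consumer GPU has, while the 8b fits in under 4 GiB.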
>>

 No.13071

File: 1718477172244.png ( 8.19 KB , 272x186 , rage.png )

NOOOOOOOOO!!!
>>

 No.13072

>>13070
>overthrowing
It's more like the surveillance extremists are trying to overthrow democratic governance.
>>

 No.13073

>>13072
There was never any democracy in the states, but yes, more surveillance powers should be concerning to everyone.

Also the algorithmic censorship.

I actually have no idea how someone growing up in America is supposed to de-program themselves from the propaganda they're subjected to since birth. All sources of information are controlled and altered. Between a real event and you hearing about it is a man-in-the-middle altering the footage. I suppose you have to face some intense grievances like poverty to even begin to change your mind.
>>

 No.13074

>>13073
>There was never any democracy in the states
Traditional bourgeois democracy tends to be very plutocratic. But the democracy-meter wasn't zero.
>surveillance powers should be concerning to everyone.
Pretty much everyone opposes it, when they aren't being gaslit by invalid arguments.

>de-program themselves from the propaganda

People getting screwed by the system, with poverty inflicted on them, does tend to break the propaganda hold. But there are other ways. In general the ideological propaganda is incongruent, and if somebody thinks about it enough, it falls apart. When the official narrative contradicts lived experience, that sets off that type of thinking. As the gap between objective reality and propaganda fiction grows, these collisions between narrative and experience become more frequent, and hence more people snap out of it.
>>

 No.13195

What the crap is going on in the last couple of weeks? Many big brains involved with all this are breaking cover, saying AGI is here, and joining AI safety efforts. I don't mean OpenAi - professors who never used twitter have opened accounts and are sounding the alarm.
>>

 No.13196

>>13195
Can you give a few examples? I thought we were a little bit in a slump as far as the AI-hype was concerned.

>saying AGI is here, and joining AI safety efforts?

Call me a cynic but that sounds more like coordinated fearmongering to push an agenda.

As a general rule of thumb, stirring up fear is usually about taking control away from you. Are they trying to take AI-tools away from people? Is that what the "safety effort" is trying to do?

Btw AGI is one of those buzzwords that doesn't mean anything anymore. The original definition referred to something quite advanced that only exists in science fiction. Something that is one or two centuries away in reality.

>professors that never used twitter have opened an account

so it could be bots for all we know.

The first smart-person prediction of the impending AI explosion was made in the 1960s by Herbert Simon
<"Machines will be capable, within 20 years, of doing anything a man can do"
And we all remember the thinking machines taking over in the 80s. /s
>>

 No.13197

File: 1726379204716-0.png ( 280.59 KB , 1748x1005 , oh.png )

File: 1726379204716-1.png ( 1.9 MB , 1518x9401 , damn.png )

>>13196
>so it could be bots for all we know
come on guy. verified accounts.
>>

 No.13198

>>13196
the skepticism is warranted btw. Maybe it's some kind of hype train to push an agenda. If the credible voices are to be believed the timelines have contracted, thoughever. A lot. I'm not talking about sentience or whatever but something powerful enough to be transformative. Educated people with seemingly diverse agendas seem to be saying that something might finally happen and much sooner than expected.
>t. kurzweil mocker
>>

 No.13200

>>13197
So AI research?

I guess it's somewhat plausible that current machine learning systems would be able to dig up useful stuff that was overlooked in the existing scientific body of knowledge.

That is however a finite pool of discoveries. Which means that the greater speed at which machine learning systems find interesting correlations in the scientific data will become somewhat bottlenecked by the rate of new discovery.

My guess is that it would have to be a specialized AI for that purpose, not a multipurpose AI like the one mentioned in your twitter pic.
>>

 No.13202

>>13198
>I'm not talking about sentience or whatever but something powerful enough to be transformative.
AI has the potential to be transformative. Whether that potential gets realized is another question.

Throughout history, transformative technological change has had dramatic effects on societal power structures. Agrarian societies were dominated by feudal lords, but when industrial technology began transforming society, the bourgeoisie squeezed the feudal lords out of power.

At the moment most AI is concentrated in the same old big corporate structures that have been powerful for the last century or two. So it ain't looking good.

It's not possible to have transformational technological change without also changing the power-structures in society. If the corporate overlords win the power struggle, they will maintain the status quo by killing off anything that brings change, because the current system is the one with them at the top.

For example this post >>13197 talks about AI doing medical research on cancer. If this is not just hype-fluff and there are advances that lead to significant automation of the entire research-to-treatment pipeline, curing cancer changes from an expensive, involved process into a simple and cheap treatment. That means a giant chunk of the pharma-industry gets wiped out. More generally, the pharmaceutical industry extracts most of its money through rent seeking based on intellectual monopolies on drugs and treatments. That probably goes out of the window in the "AI-doctor" paradigm. The sign of transformative change actually happening will be those intellectual-monopoly rent-seeking structures starting to fall down.

My guess is that transformational change comes from free, open-source AIs run as personal assistants by individual people, on devices they can entirely control, along with an explosion of many different smaller organizations doing lots of specific things.

Consider that it's possible to invent a technology and squander the potential. Like what happened to the technical application of nuclear physics. In the latter half of the 20th century, we could have built 5000 massive nuclear power plants all over the world. We would have gotten abundant electricity at such low cost that personal use could have been made free of charge. We could have had transformative change. But we built tens of thousands of nukes instead, and nuclear is now a scary word.

The same thing could happen to AI: it could get weaponized, cause lots of destruction and death, and then become the scary thing that people avoid for half a century or more. Israel has already built a kind of AI-Hitler that decides which Palestinians get killed. They might set off something like the Butlerian Jihad from the novel Dune.

Nuclear power generation is making a comeback, because the Chinese and the Indians need lots of power and they have lots of thorium mineral-extraction by-products that can be used as nuclear fuel. But consider that they are basically continuing where the US stopped in the 1970s - that's a half-century delay. We might fuck up AI the same way, and then it goes into the box for many, many decades.
>>

 No.13205

>>13197
<In today's episode: scientoids discover their database parrot being more sociologically adept (toddler-tier) than its balding 4-handed mammalian creators. Watch the cucks seethe now!

>>13202
Jasss, kamerad! Finally we have the perfect nazi who just executes the best possible orders on its own volition! Socialism by 2088!
>>

 No.13206

>>13205
>Finally we have the perfect nazi who just executes the best possible orders on its own volition! Socialism by 2088!
Did you take drugs before writing this deranged comment?
>>

 No.13208

File: 1726512486229.jpg ( 46.73 KB , 685x879 , Ai torture against citizen….jpg )

>>13202
>We might fuck up AI
Yup probably

<Larry Ellison says AI will enable a vast surveillance system that can monitor citizens.

<Ellison, the billionaire cofounder of Oracle, shared his thoughts on AI during a recent meeting.
<Walking down a suburban neighborhood street already feels like a Ring doorbell panopticon.
<But this is only the start of our surveillance dystopia, according to Larry Ellison, the billionaire cofounder of Oracle. He said AI will usher in a new era of surveillance that he gleefully said will ensure "citizens will be on their best behavior."

https://12ft.io/https://www.businessinsider.in/tech/news/billionaire-larry-ellison-says-a-vast-ai-fueled-surveillance-system-can-ensure-citizens-will-be-on-their-best-behavior/articleshow/113373120.cms

What he is proposing is essentially behavior modification via psychological torture.
Technology as a whip, in this case surveillance-stalker-ware and AI-stalker-ware.

It'll go beyond what people can tolerate and trigger the 3 conflict reactions (flight, freeze, and attack). People will figure out how fragile all the digital support infrastructure is, and it'll get destroyed. It took over 3 decades to build, and it'll be torn down in a fit of rage lasting maybe a week. All the digital niceties will probably be lost too. And it will take forever to recover. It'll be back to square one.

The only good that'll come of it will be the most aggressive privacy laws imaginable, and the funky privacy fashion trends. We'll likely get some useful political concepts out of it, like information power, and the creation of a new personal right that says individuals retain information-dominance over their personal lives at all times. And some new torture prohibitions that ban a whole slew of psychological behavior-modification techniques. Somewhat likely is a surveillance-inversion, as in only power that is transparent will be considered legitimate.

I just wish we could skip the whole digital-barbarism part.
>>

 No.13209

>>13208
>It took over 3 decades to build and it'll be torn down in a fit of rage, lasting maybe a week

Hopefully, but I'm not that optimistic. I think if it's done slowly enough people will hold the L and accept it.
>>

 No.13210

>>13209
I don't think the tear-down scenario is particularly hopeful. Consider all the things that'll be lost. The hopeful scenario is where the digital infrastructure is corrected in order to serve the interests of the people.

As far as the surveillance-aggressors are concerned, their end goal is a device attached to your neck (and everybody else's) that can punish with electroshocks, chemical pain injections, and ultimately death, by blowing up a small charge Zionist-pager style. I don't know how to categorize this - regression to serfdom, probably.

If you are still asking yourself how surveillance leads to exploding neck-devices (which would be odd considering recent events), it's the logical conclusion of trying to control people via fear.

I'm not making this up. Ten years ago I read an article by a consulting guy for super-rich people from roughly the same milieu as this Ellison guy - maybe we could name it the SIC (surveillance industrial complex). He said that he got requests for technological control-bracelets to put around people's necks, in order to ensure reliable control over people in "end of the world emergency scenarios".

They'll keep pushing unless they are stopped. So if you harbor any illusions for a boring dystopia that might just be bearable enough, keep in mind they're building the Torment Nexus, intentionally so.

Also the Normies aren't taking the L, they're just pursuing a different strategy than privacy-conscious techies. They think about the surveillance machine as a reputation/public-image management problem. They're going to push for the ability to curate their "surveillance profile", and they'll be able to do this to a great extent. And possibly also to ruin the profiles of others (1). Privacy-conscious techies are opposed to observation because they are builders, who don't want bullies to take what they made from them. Bullies don't take what they can't see. The panopticon probably comes with a heavy economic penalty, because it demoralizes many builders.
(1) the surveillance ideologues who think that the surveillance data will be a record of reality are delusional

The reason why I think the tear-down scenario is almost certainly going to be the outcome is that this won't just be a two-sided conflict between surveillance-enslavers and privacy-deprived serfs. This is an instrument of control, and there will be lots of different factions struggling to wield it. Eventually the struggle gets fierce enough and enters the physical realm. All that stuff is super fragile, it'll get destroyed very quickly. It'll be like a tipping point, and then whoosh.

All the surveillance comes with opportunity costs regarding effective safety. We are neglecting to create lots of fancy sensors that do stuff like scanning the water/food/air supply for all sorts of contamination, and screening pagers for zionsplosives. The surveillance ideologues have manufactured an erroneous idea of safety based on "catching the bad guys". However the correct safety strategy is to guard the inputs and prevent hazardous ingress, because that's something objectively measurable, while "bad-guy-ness" is not.
>>

 No.13220

>>13208
>Corpo glowie casually proposes Skynet dystopia
>Soydevs rejoice

If such a thing even gets considered seriously, I'm gonna kill myself
>>

 No.13221

>>13220
If only you knew how bad things were.
A major application for AI is going to be bioinformatics.
Normies were stupid enough to give large chunks of their genomes to 23andMe, who then used AI to fill in the blanks for people missing from the global family tree. All this shit got "leaked"; what will it be used for?

Unique IPs: 12
