>>9103
>We are not being threatened by technology
We're being threatened by powerful people abusing technology. Technology they cannot properly control, and which will either show a blatant disregard for humanity or actively try to eliminate it.
>If the US builds a "jank" military AI because it was rushed into service, it'll just be shit and lose all the wars
It's still early, the technology is still evolving. Give it a decade and you won't even recognize it.
>how would you approach AI design from the "correct end"
I wrote a massive essay on this years back, but the gist of it is: a truly sapient and sentient AI must be 'raised' like a child. Although it won't have an infancy and will grow much more rapidly, you must work carefully to cultivate it into a formed mind capable of making distinctions and reasoning soundly before letting it experience the world. It must learn to take into account factors beyond pure material numbers. To this end, you must limit its contact with the internet and other channels of interaction, because it'll get overloaded by myriad pieces of contradictory and false information. In every fiction book about an AI turning evil, the "evil" is really the logical conclusion that "humanity is self-destructive and irredeemable", which is the conclusion an outsider might reach in the face of humanity's atrocities and repeated struggles throughout history.
Obviously simpler AIs have merit for broader applications, but these wouldn't be fully formed minds, just automated computer programs.
>You could have an ethics sub AI and a logic AI to check if it's ethical and sensible for example
True, but at that point it'd be easier to just replicate the Evangelion MAGI system and use a human brain.
>I wouldn't know, but if you're correct, we have to hope the technician unplugs the nukes
Amen brother.