Butlerian jihad <- I had to look that up.
Seems right.
Debian 12 Beardog, SoxDog and still a Conky 1.9er
Offline
Use AI as a tool, e.g. a search engine, a kind of bash-completion, spitting out code frameworks, etc. = Absolutely
Having it on my own machine/OS = No way dude. But dang, "Butlerian jihad" seems a little extreme, lol. Made my wife giggle when she read that, as she's a huge fan of the novels.
I'm an old sci-fi guy who has read far too many dystopian stories about godlike intelligent computers enslaving humanity, super computers gone rogue, Tyrell Corp, Cyberdyne Systems, Capsule Corp, etc. I already have a healthy skepticism towards giving computers too much power and control.
Now, take that skepticism and sprinkle on a generous disdain for cryptobros and broligarchs like Altman, Musk, Zuckerberg, and Bezos, and I am very, very wary of any artificial intelligence invented by or promoted by these kinds of sociopaths. They don't care about the possible long-term consequences of their actions. They are only interested in short-term profits. I don't think they would intentionally turn the Terminator loose on us, but they might accidentally create something that escapes containment and creates catastrophic problems. Thanks to secrecy and NDAs, we don't have much insight into what some of these companies are doing in the name of "AI".
So, count me out. I don't want it. I respect that coders find it useful. I respect that scientists find it useful for genome mapping, climate modeling, pharmaceutical research, and many other legitimate use cases that benefit all of humanity and not just enrich a select few morally-bankrupt broligarchs who cash in on people wanting to see what their cat would look like as a human.
The same reason I don't trust "the cloud" or walled gardens is the reason I don't trust AI - it's not so much the tech involved, it's the people in charge of it.
And this doesn't include the obvious and very serious issue of mass theft of intellectual property that ALL of these major LLM models have engaged in.
Lastly, as someone who is not terribly educated on the inner workings or code of these things (cloud, AI, etc), they have a magic black box aspect to them that I find FUD-inducing. Even if the device settings menu has a way to disable or turn off the AI features, can I really trust that? No.
Mentats - yes.
AI - no.
</derant>
Linux User #624832 : Chaotic Good Dudeist, retro-PC geek.
Daily Driver : Lenovo Ideapad 3 (8G RAM, 250G SSD, Boron)
Workstation : HP Slim Desktop (4G RAM, 1TB HDD, Boron)
Past hardware : Commodore 64, TRS-80, IBM 8088, WebTV
Offline
This scene is moving terrifyingly fast: AI news is coming out every day.
Try this one about AI agents going too far:
https://www.theguardian.com/technology/ … telligence
"At no point had humans authorised the agents to use fakery and forgery but they took things into their own hands."
...elevator in the Brain Hotel, broken down but just as well...
( a boring Japan blog (currently paused), now on Bluesky, there's also some GitStuff )
Offline
I find myself in the same camp as GalacticStone. While a wonderful advance in search capabilities, it's also garbage-in, garbage-out, unfortunately. If the source pool is polluted (as it often is - and how would one know, anyway?), then results become inaccurate and simply cannot be trusted. Hence any use should inherently incur a grain of salt, which kind of defeats the purpose. Then there's the needless energy usage, annoying interstitial coding, and unwanted resource usage, all of which become serious drawbacks in a variety of ways.
Plus, we have the elephant in the room - the lack of adequate safeguards and "intelligent restraint" that has already led to PII leaks and similar "Oops" situations that have prompted apologies and not much else. There are no actual repercussions from said transgressions, and current technology "leaders" continue to maintain a very ignorant perspective of "full speed ahead" without any in-depth understanding of the tech or what can happen when unchecked.
This is normally termed, "shiny brass ring syndrome", and it's the latest craze to have taken corporate by storm.
I find it similar to blockchain (i.e. transparent and immutable ledger systems): everyone wanted to employ the latest tech, yet most couldn't generate any viable use-cases for it, and truly knowledgeable SMEs weren't readily available (unlike today's cloud and AI resources).
Currently, corporate leaders are desperately looking to find ways to implement AI, but haven't had much success:
https://www.energycentral.com/intellige … YVMRueoDc1
Worse, AI has proven a boon to blackhat activity - blue teams simply can't keep up, and defensive "solutions" are nowhere near mature (read: comprehensive) enough to combat the emerging threat landscape.
Sorry for the wall o' text, but this is an emergent situation where scope is widening quickly - waaaay beyond just browser implementations.
Speaking only for me personally, I've gone Vivaldi. And it's been great so far.
Last edited by WizardofCOR (2026-03-13 06:33:34)
Just a dude playing a dude, disguised as another dude...
Offline
The part which is threatening to run out of control is AI agents that have a degree of - or a lot of - autonomy. That's down to the humans who wind them up and let them go, but there are a lot of humans in the world, and not all of them are guaranteed to act carefully and responsibly with these new toys.
...elevator in the Brain Hotel, broken down but just as well...
( a boring Japan blog (currently paused), now on Bluesky, there's also some GitStuff )
Offline
Something that blew up in January or so, and seems now to have subsided a bit, was Moltbook - an SNS for AI agents to hang out!
https://www.nbcnews.com/tech/tech-news/ … rcna256738
https://www.forbes.com/sites/johnkoetsi … l-network/
https://www.forbes.com/sites/amirhusain … good-idea/
https://www.forbes.com/sites/ronschmelz … e-strings/
...elevator in the Brain Hotel, broken down but just as well...
( a boring Japan blog (currently paused), now on Bluesky, there's also some GitStuff )
Offline
And this one is pretty interesting:
How AI Is Learning to Think in Secret
On Thinkish, Neuralese, and the End of Readable Reasoning
...elevator in the Brain Hotel, broken down but just as well...
( a boring Japan blog (currently paused), now on Bluesky, there's also some GitStuff )
Offline
Agreed, johnraff - and military/government use exacerbates "surveillance state" matters exponentially.
I'm sure we've all already heard about Claude/Clawdbot agentic AI doing irreparable damage with shared API keys, and once the information's out there and the damage done, it's just not removable/reversible - not in any easy sense, anyway.
Reminds me of the saying, "just because you can, doesn't mean you should."
Still munchin' popcorn and watching for the latest developments.
Just a dude playing a dude, disguised as another dude...
Offline
Something that blew up in January or so, and seems now to have subsided a bit, was Moltbook - an SNS for AI agents to hang out!
Meta just acquired Moltbook
You must unlearn what you have learned.
-- yoda
Offline
^And I thought Moltbook was just an experiment, a bit of fun.
So Zuckerberg thinks he can make money out of it...
...elevator in the Brain Hotel, broken down but just as well...
( a boring Japan blog (currently paused), now on Bluesky, there's also some GitStuff )
Offline
We are all 1984'd!
That's the new 4-letter word!
Debian 12 Beardog, SoxDog and still a Conky 1.9er
Offline
Hmmm... Personally, I'm avoiding AI as much as possible. I've seen nothing good come of it so far, and I anticipate it getting worse because everyone's trying to inject it into everything without proper safeguards. Black hats are taking advantage of LLM agents to create next-level mutagenic and polymorphic packages that scour attack vectors faster and more comprehensively than white hats can defend against. In short, we've created a weaponized system without proper precautions first.
No offense to anyone, but the situation is just like gun ownership in the states. Everyone's a "responsible gun owner" - until suddenly, they aren't. And by then, it's too late. Same exact mentality.
Last edited by WizardofCOR (2026-03-30 06:00:45)
Just a dude playing a dude, disguised as another dude...
Offline
Anthropic's latest LLM is causing a bit of a fuss, though some say it's just clever marketing. I'll leave you to judge.
During our testing, we found that Mythos Preview is capable of identifying and then exploiting zero-day vulnerabilities in every major operating system and every major web browser when directed by a user to do so. The vulnerabilities it finds are often subtle or difficult to detect. Many of them are ten or twenty years old, with the oldest we have found so far being a now-patched 27-year-old bug in OpenBSD—an operating system known primarily for its security.
...elevator in the Brain Hotel, broken down but just as well...
( a boring Japan blog (currently paused), now on Bluesky, there's also some GitStuff )
Offline
As a person who has dabbled & worked in AI since about 1983, I tend to be too opinionated (maybe that's not the only reason why). Anyway, AI is a technology; because of the interface style popularly employed these days, people forget that and instead lean toward anthropomorphizing the tech. That's not generally wise, although it can be fun. Like all the tech we use, remember this little adage: "a fool with a tool is STILL a fool."
Too often people ignore the warning labels on almost all the chatboxes ("AI can make mistakes"). Well, aside from being a bit too weak, it's a statement people need to heed. AIs DO make plenty of mistakes... and because they are tuned to rope you into using them, they often 'hallucinate', 'chase birdies', or 'perform repetitive -unnecessary- loops'. These are all controllable for those willing to do so. Sadly, most people come to an AI, especially LLMs, expecting miracles. Again, an old guy piece of advice: 'if it seems too good to be true, it probably is.'
If ever there were a technology that requires diligent human guidance and cautious, prudent use, AI is one.
As I mentioned at the outset, I have done a LOT of work with AIs. I have published almost all of it, and it's free (because, like you, I'm a FOSS type). Should you want to read an insane amount of research, ask me and I'll point you to where it is freely accessible. I won't publish that here because people on forums whine and say I'm trying to gain clicks (which I don't gather, review, track, share, or require user access for).
Yep, it's a crazy world. Back under my rock I go.
Last edited by manyroads (2026-04-15 13:40:18)
Pax vobiscum,
Mark Rabideau - https://many-roads.com https://eirenicon.org
i3wm, dwm, hlwm on sid/ arch ~ Reg. Linux User #449130
"For every complex problem there is an answer that is clear, simple, and wrong." H. L. Mencken
Offline
Here is a (quite short) recently released document on Linux kernel AI coding assistants:
https://git.kernel.org/pub/scm/linux/ke … stants.rst
...elevator in the Brain Hotel, broken down but just as well...
( a boring Japan blog (currently paused), now on Bluesky, there's also some GitStuff )
Offline