There is nothing like...
...feeling secure. Too bad so many things will never be secure.
645 publicly visible posts • joined 21 Jun 2007
...Intel talks about them a lot. But for the past few years most GPUs on the market have had Tensor cores built in. They are specifically designed to run the neural networks inside AI models. So I would argue any gamer with Tensor cores in their GPU already has an AI PC. Welcome to the future.
...like turning off security features to make the kernel smaller (how many distros used in IoT turn off AppArmor?), or not locking down TCP port 23 (telnet), or not even warning users who are trying to log in over an unencrypted connection when ssh is available (all implementations ever).
The low hanging fruit here is really low. Yet a neophyte or an overworked admin either won't know or care.
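A minimal sketch of how low that fruit hangs, assuming you just want to see whether anything on your own network still answers on the telnet port; the addresses are made-up examples, not from the article:

```python
# Quick-and-dirty check for the lowest of the low hanging fruit:
# is anything on the LAN still answering on TCP port 23 (telnet)?
# Hosts are placeholder examples; point this at your own kit only.
import socket

HOSTS = ["192.168.1.1", "192.168.1.10", "192.168.1.20"]  # example addresses
TELNET_PORT = 23

def port_open(host: str, port: int, timeout: float = 1.0) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

for host in HOSTS:
    if port_open(host, TELNET_PORT):
        print(f"{host}: telnet port is OPEN -- go turn it off")
    else:
        print(f"{host}: nothing listening on 23, good")
```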
...because all that is needed is to make account takeovers unprofitable for Antisocial Networks. That might be a step towards turning them into Social Networks again. However, as long as they can make money the easy way and avoid any obligation to protect the people they use on a daily basis, this will continue to happen. At this rate we'll be lucky if it ever gets upgraded to a game of whack-a-mole.
As is often the case, I once again must point out the difference between the size of source code and the size of compiled code.
Yes, 7 Very Floppical Magicy Things. No, Control or Customization.
If you had just the source code for that OS you still could not build it. And if you can't build it you can't control it.
...the Gentoo package manager, I would need to download 314,375 KiB to compile LXQt 1.4 on a system. 233,048 KiB of that is because they are using KDE's Oxygen icons. By way of comparison, Firefox itself is 495,690 KiB of source code. So when people say it's lightweight, they mean it. I haven't used it in quite a while though because I have been stuck in a rut with XFCE, but I'll spin up a VM to test 2.0 when it gets closer to stable.
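Roughly how I get numbers like that, wrapped in Python so it can be dropped into a script; the lxqt-base/lxqt-meta atom is just an example target, and your figures will vary with USE flags and what is already installed:

```python
# Dry-run emerge and pull out the "Size of downloads" summary line that
# Portage prints at the end of a --pretend --verbose run.
import subprocess

ATOM = "lxqt-base/lxqt-meta"  # example atom; swap in whatever you want to measure

result = subprocess.run(
    ["emerge", "--pretend", "--verbose", ATOM],
    capture_output=True, text=True, check=False,
)

for line in result.stdout.splitlines():
    if "Size of downloads" in line:
        print(line.strip())
        break
else:
    print("No size line found -- check the emerge output for errors.")
    print(result.stderr)
```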
...This client library received unprecedented load conditions caused by devices coming back online all at once. As a result of increased demand, it mixed up device ID and user ID mapping and connected some data to incorrect accounts."
Wyze is perhaps the worst software, hardware, and business model to ever exist. The lowest bidders get to bid on who the lowest bidder is.
Fair to whom? As I said, things have changed over time. So I don't see the need to call foul over apples vs. oranges. It is evolution in progress. I don't know where it will wind up but I do think that we are losing more than we gain when the code base for any complex enough web browser must contain a poorly implemented OS. Hopefully with time this problem will be solved.
...You could download the Mozilla Firebird beta and it was less than 1 MB in size. After about 20 years no one has been able to match the wonderful lightweight experience that it gave the user. Times have changed, and the internet along with it, and operating systems too. So in the coming years I can see myself moving on from Firefox. But only when I have to. Even now I have other browsers installed though I only use them for certain tasks. Maybe someday browsers can go back to being small tools that do simple things but I don't think that's likely. They will just be replaced by things that mask the bloat and memory usage as part of the OS.
...because a lot of the data used by scientific Distributed Computing projects through BOINC comes from things like this. MilkyWay@home is using data from the Sloan Digital Sky Survey to build the most accurate map of our galaxy, Einstein@Home uses data from radio telescopes (MeerKAT, Arecibo when it was active, etc.) plus LIGO to investigate objects of extreme gravity, and of course LHC@home is using real world data from CERN to not just process data from experiments but also model experiments before and after they are run.
My point is that there is going to be so much data that it will take decades to work through. Some scientists are even going back to data gathered by the Pioneer and Voyager missions and applying modern computing techniques to see things we couldn't see in the past. The money saved by the science teams and the quality of the results they get from volunteered processor time speak for themselves. And this is a good thing in the era of carbon credits.
This is why I like keeping things on a central file server accessed with a stupidly fast network. I can access my pics or my games or some other part of my data and even though the laptop might only have 4GB of RAM the iSCSI server has 128GB plus a dedicated 256GB NVMe filesystem cache. The flash storage and the RAM are faster than my 10Gbit network and those are much faster than the SATA SSD in the laptop.
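Some back-of-the-envelope numbers for why that works, using nominal interface speeds rather than anything I have benchmarked; the exact figures are assumptions, but the ordering is the point:

```python
# Nominal throughput of each link in the chain, in MB/s. These are rough
# interface speeds, not measurements from my setup.
links_mb_per_s = {
    "SATA III SSD (local)":   600,    # 6 Gbit/s line rate, ~550 MB/s in practice
    "10GbE network":          1250,   # 10 Gbit/s line rate
    "NVMe cache (PCIe 3 x4)": 3500,   # typical sequential read
    "Server RAM (DDR4)":      25000,  # rough single-channel figure
}

baseline = links_mb_per_s["SATA III SSD (local)"]
for name, speed in links_mb_per_s.items():
    print(f"{name:<24} ~{speed:>6} MB/s  ({speed / baseline:.1f}x the local SATA SSD)")
```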
...will need more than hardware solutions. Most Generative AI models a person can run on their own hardware use CUDA or OpenCL for inference and not the Tensor Cores that have been built into the latest generations of GPUs. Keeping the total power draw and the waste heat to a minimum will likely involve making sure computers are using every software and hardware optimization available. Of course the Extra Oomph we are seeing could just be "Window's Dressing" and most users could find themselves being billed for cloud services when little Timmy asks his computer to write an essay about the dangers of plagiarism.
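To put something concrete behind "every software and hardware optimization available", here is a minimal PyTorch-flavoured sketch; PyTorch and these particular flags are my example, not something the article mentions:

```python
# Having tensor cores in the GPU is not enough; the software has to ask for them.
# Enable TF32 matmuls and run a layer under autocast so eligible ops take the
# reduced-precision (tensor-core) path on recent GPUs.
import torch

if torch.cuda.is_available():
    # Allow TF32 for matrix multiplies (Ampere and newer route these to tensor cores).
    torch.backends.cuda.matmul.allow_tf32 = True
    torch.backends.cudnn.allow_tf32 = True

    layer = torch.nn.Linear(4096, 4096).cuda()
    x = torch.randn(64, 4096, device="cuda")

    # Mixed precision: eligible ops run in float16 where that is safe.
    with torch.autocast(device_type="cuda", dtype=torch.float16):
        y = layer(x)
    print("output dtype:", y.dtype)  # float16 inside the autocast region
else:
    print("No CUDA device available; nothing to demonstrate here.")
```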
...Now let's hope someone doesn't set fire to it. While this is only a single step it is at least a step in the right direction. If generative AI can get to a point that minimizes risks for the artists, the people that train the models, and those that use the models then perhaps new art can flourish that otherwise would never have been seen or read. Even though AI is currently very powerful and able to somewhat mimic human imagination by doing math with artistic styles, it can't imagine on its own, just provoke the imagination of its users. Which makes it a worthwhile tool. Hopefully the Nash Equilibrium is reached and there is a phase transition.
Then AI might become boring like putting people on the moon and we can get on with the next phase of hype.
...But why in the hell is so much of it stored at all? Wouldn't we be better off with a smaller attack surface and a smaller bag of goodies for the bad guys to steal? The thing that really bothers me is that almost everyone reading this already knows what I mean, and there isn't much measurable effort being made to communicate the urgency to the average end user. Windows isn't the only place the bloat puts people at risk. On my Linux desktop I try to avoid the truth about the /home/user/.* directories by telling myself "Hey, at least you are using some encryption here." Which wouldn't matter if a bug like this could be exploited via the new in-kernel SMB server. It's not likely that I am at risk of much, but I know I have to be at risk of something.
...Towards making the Stable branch actually stable. In the vast majority of cases people will at most need to adjust their "USE flags" which govern which features are enabled when compiling a given package. See https://wiki.gentoo.org/wiki/USE_flag for docs if you have never tried it. One of the problems with a source based distro is that if a file becomes corrupt or if a system's software is years out of date there may not be a clear and easy path to get the system back to a stable and updated state. So for years Gentoo has offered the "tinderbox" or unoptimized binaries that can be downloaded to help recover a system. However these only covered the base system and not even the most commonly used programs in the Portage tree. I see these new binaries as not only a time saver but also a simple and effective way for neophytes to troubleshoot. If a program is misbehaving, just download the binaries and try to run it. If it still fails then you either have a deeper problem or the problem is something that needs a bug report.
Personally I don't see myself using it but it's really nice to know it's there. If a person is running lower end hardware and wants to get up and running ASAP this is a wonderful path forward. Otherwise you need to do some extra (although not too hard) work. For example my home server even has a carefully curated Gentoo VM that serves as a remote package compiler for my laptop via distcc. Both approaches are elegant solutions in my eyes.
...or even slightly surprised but no one ever said that humans directly employed by StabilityAI reviewed every bit of potential training data. Who in their right mind would say "I'll take all the money you are willing to give me so that I may become intimately familiar with the worst of the worst of the worst content the Internet has to offer?"
Then again, who in their right mind would train AIs with uncured data sets? Yes, not just curated, but cured, like an XMAS ham.
To be fair and fully disclose relevant info, I use Stable Diffusion a few times a week and I am glad that I do not own any of the stock. These things need guard rails for their guard rails.
It takes time to maintain things, even digital data constructs. Human effort billed in fractions of a lifetime. By not fixing it they get a teeny tiny bit more profit. If you get rewarded for cutting corners more often than you are punished for it, it becomes profitable.
Quid Pro Quo.