Not written by AI
> platform agnostic autonomy in complex, mission-relevant off-road environments
Only a human could come up with such drivel.
> the sort of environments typically encountered in current ICE refuelling stations.
Yes. I cannot be the only person feeling a little apprehensive about the prospect of manhandling a 100kW or more power source on a rainy garage forecourt, when it's 10 years old and looking a little ragged round the edges.
> what, 2 MW per charger?
Pretty close, yes.
For argument's sake, let's start at about 5 miles of range per kWh of battery.
So 750 miles of range (apologies for using units that hardly anyone knows about any more) is 150kWh. Add in charging inefficiencies, so maybe 200kWh into the charger.
To deliver that in 10 minutes requires 1.2MW.
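A minimal sketch of that back-of-the-envelope arithmetic, for anyone who wants to plug in their own numbers (the 5 miles/kWh and the ~75% charger-to-battery efficiency are illustrative assumptions, not measured figures):

```python
# Back-of-the-envelope: charger power needed for a 750-mile, 10-minute fill.
miles = 750
miles_per_kwh = 5.0                       # assumed vehicle efficiency
battery_kwh = miles / miles_per_kwh       # 150 kWh into the pack

charge_efficiency = 0.75                  # assumed charger-to-battery losses
charger_kwh = battery_kwh / charge_efficiency   # ~200 kWh drawn from the charger

minutes = 10
power_mw = charger_kwh / (minutes / 60) / 1000  # kWh per hour -> kW -> MW
print(f"{power_mw:.1f} MW")               # 1.2 MW
```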
> it is ready to roll out its solid-state-batteries with a range of 745 miles (1200 km) and charge time of 10 minutes by 2025.
Great!
In that case I will delay buying an EV until that battery is in cars.
Oh, and until the charging infrastructure can deliver the aforementioned charging rate. So maybe a little (or considerably) longer after that.
(Would it be nit-picky to point out that batteries don't have an intrinsic range? It depends entirely on the vehicle they are installed in: an electric scooter or a Humvee.)
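A one-line sketch of that nit-pick, with efficiency figures made up purely for illustration:

```python
battery_kwh = 150
# Hypothetical miles-per-kWh figures: same battery, wildly different range.
for vehicle, miles_per_kwh in [("electric scooter", 30.0), ("Humvee", 1.2)]:
    print(f"{vehicle}: {battery_kwh * miles_per_kwh:.0f} miles")
```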
> It still generates junk based on probabilities based on what it was fed
If you genuinely believe that, you should stop what you are doing and read this article in The Spectator.
It will update your information, which seems to be at least a year behind.
> refuse to have anything to do with any media that has any generative content within it
Queen's first few albums included a small notice reminding us: "No synthesisers". That is, all the music was generated by people playing actual instruments.
However, they later relented and adopted synths.
ISTM that AI will follow a similar path: those who initially choose to avoid it will eventually see the benefits and join the rest of the world in using these tools to make better stuff.
> to replace nurses
In the not too distant past, an elderly relative was admitted to hospital.
The admission was routine, not an emergency.
On arrival (I accompanied them), a "nurse" on the front desk took the doctor's letter and then asked for all sorts of personal information: name, address, medications, etc.
When they were sent to a ward to await treatment, another individual sat down with them and proceeded to fill in another paper form, requiring most of the same information.
A day later, before the treatment began ... can you guess the next bit? Yes: a third nurse replicated the actions of the previous two.
This all took place in a major London hospital, with three fully trained medical staff performing menial administrative tasks - and, for two of them, wasting that time repeating what had been done before (and that should already have been on a computer from the original GP's referral).
If AI can eliminate this waste of valuable and scarce medics' time, then I am all for it. However, I suspect that instead, the same tasks will continue to be performed by those with nursing degrees, but this time taking twice as long as they try to navigate badly designed and poorly implemented "islands" of data input, none of which talk to any others.
> Anything that can drive and add more capacity into our healthcare system, I think ultimately is gonna result in a better patient experience."
The big but here is that this is the NHS we are talking about: an organisation of 1.4 million individuals that has never successfully implemented any major IT project.
> "While the methods adopted in this work appear to hold promise, there is clearly a great need to incorporate domain expertise in materials synthesis and crystallography."
Which is what you'd expect from a technology in its infancy.
I am sure the reasoning behind Google's paper was as much to publicise their process as to report loads of potentially usable new materials.
We all know that for scientists, the quantity of publications is often of greater importance than their quality, since so few papers report world-changing phenomena. On that basis, the Google scientists are engaging in exactly the same behaviour as their detractors: using the currently trendy keywords (AI and all that flows from it) to bolster their reputations.
Essentially it is an exercise in self-promotion, as so many other publications are.
> The median cost of these breaches, both in the short and long term, stands at £0
Which tells us why the small businesses that make up the majority of cases took no action.
However, this figure needs care: the median of a set of values is the middle value, not the average. A median of £0 therefore does not require some victims to have made a profit to offset the others' losses; it simply means that at least half of the victims reported no cost at all.
Another explanation is that there was a typo, or that the analysis, or the report, is wrong. Which makes people question what else could be incorrect.
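A quick sketch, with invented numbers, of how a £0 median can coexist with real losses:

```python
from statistics import mean, median

# Hypothetical breach costs (GBP) for ten small businesses;
# more than half reported no measurable loss at all.
costs = [0, 0, 0, 0, 0, 0, 1_500, 4_000, 25_000, 120_000]

print(median(costs))  # 0.0     -> the "middle" victim lost nothing
print(mean(costs))    # 15050.0 -> yet the average cost is far from zero
```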
> I'm not sure AI really is great at replacing creatives
Probably not. However, many creatives aren't that creative. Sure they're good at drawing 'n' stuff. But as far as truly original work goes? Those creatives are few and far between.
And if AI can do the donkey work, that might just free up the true creatives to do more actual creating.
It sounds rather imperialistic for one country to impose its time system on an entire celestial body.
Especially as every country that lands people or machines on planets or moons could equally decide to use its own local timezone (and probably already does), irrespective of the state of illumination at its station on that surface.
Far better would be a globally accepted set of principles to set the standard, much as exists on Earth.
> the first author to describe an immersive cyberspace, which he outlined in his 1979 novella True Names
So four years after John Brunner's novel The Shockwave Rider.
"The future of work in an AI-powered world is on the cloud,"
Was this comment written by an AI (in the cloud)?
Because ISTM that the way AI is moving is from the cloud onto local instances: ones under the control of the individual or organisation that hosts them, without the need to leak all their data to some unknown location that does not have their best interests in mind.
> peppered Colin with questions about deeply technical hypothetical mail server issues.
I think we have all done technical consultancy masquerading as interviews.
The problem with freely displaying highly specialised knowledge and then being employed on the basis of it is that depth of knowledge often increases at the cost of breadth: your field of expertise gets narrower and narrower.
In addition, once you have solved the particular issue that the job offer was based upon, what further use does your new employer have for you?
> February 29
Also, it's the day many people don't get paid for working.
If you are paid weekly, then the extra day just forms part of the normal working week and employers pay their staff for the leap-day.
But if you are paid monthly, then you get the same amount for February 2024 as you got for February 2023 (assuming no intervening pay rise - an increasingly common complaint), even though you work an extra day in 2024.
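A minimal sketch with a hypothetical salary shows the effect on the effective daily rate:

```python
annual_salary = 30_000.0          # hypothetical salary, paid monthly
monthly_pay = annual_salary / 12  # identical for Feb 2023 and Feb 2024

print(monthly_pay / 28)  # ~89.29 GBP per day in February 2023
print(monthly_pay / 29)  # ~86.21 GBP per day in February 2024: extra day, same pay
```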
> prevent the sale or transfer of Americans' sensitive personal information and government-related data
At this point I suspect the baddies are either climbing back into their chairs after a good ROFL, or scratching their heads in confusion.
In both cases wondering why anyone would go to the expense of buying this data when it is easier to simply take it, for free, from unsecured sources.
> Firstly. We have to throw away backwards compatibility again
And that is where it all falls apart.
Nobody except computer science students buys platforms simply because they have a novel architecture. The real world buys computers to get stuff done, and as such, academic articles about the neatness of the architecture get short shrift.
If it were even a little bit important, nobody would have gone down the Intel / 80x86 branch, but would have stuck with the "cleaner" addressing modes of the 68000. Yay! Let's revive the architecture battles of 40 years ago.
But they didn't. The decision makers with the money to spend chose backwards compatibility, and they will always do the same again. The biggest reason is that nobody would trust an emulation layer without testing all their code and data against it and fixing the inevitable gaps.
The Americans were quite happy to talk about their Strategic Defense Initiative forty years ago. Yet there wasn't a mention of it in the article. Did I miss it?
And it appears they had no qualms about treaties, legalities or other philosophical limitations.
Although it was never implemented (as far as anyone can tell), I suspect that in the decades since then the plans have been refined, the technology updated - just how stealthy can you make a satellite? - and the countermeasures improved.
> I really can't think of any features I use now which I didn't use then
Yes. Whenever I hear about a new release of anything - O/S's, apps, browsers, anything - the only question I have is: what will I be able to do that I cannot already do?
And far too often the answer is nothing.
> F-35 Jet Fighter Helmet
The difference is that an F-35 is sold as having a service life until 2070, while if this is a typical Apple product, it will be pulled before 2030.
Though I expect the last 40 years of the F-35's life will be spent as target practice for the next generation of pilotless, stealth, hyper-agile fighters.
> If users on multiple support threads are correct ...
A good question. But one the article fails to answer. And until someone who knows does so, this is just a long piece of supposition.
The great thing about FOSS is that it is at least possible for independent operators to answer such questions.
> "The fact that PRC hackers are targeting our critical infrastructure, water treatment plants, our electrical grid, our oil and natural gas pipelines, our transportation systems — and the risk that poses to every American requires our attention
The more worrying issue is that they can!
What are "critical infrastructure" elements doing on the wild west internet in the first place?
What provides cheap and convenient access for remote operators provides exactly the same for bad actors.
While "diversity" is fun and gives the impression to being good, it acts as a drag on software development.
When each version of an O/S has a UI that needs its own special coding to get stuff done, it rapidly becomes expensive for software makers to support all those variants.
Likewise, when every user is running their own special customised desktop, Linux version, or hardware, then supporting all that stuff becomes a nightmare too.
It seems to me that the operating systems that are successful - that have non-negligible market share - are the ones that offer few, or no, options for variation. We see the same thing in the mass production of consumer durables: few options, but cheaper and selling more than bespoke, custom, boutique products where every one is different from every other.
> we feel that the Linux desktop world badly needs more diversity of design.
I understand the idea. But ...
People go to a zoo to see all the animals and to marvel at the diversity, their different shapes, sizes and habits. But that doesn't mean they would want any of them in their home.
ISTM what people want from either a domestic pet, or a Linux desktop is something clean, well-behaved and easy to look after.
I realise there are always a few "look at me" types, who feel the need to display their individualism by choosing something exotic - just like all the other individualists do. However, the lessons of which O/S's are successes and which have marginalised themselves by offering a slew of alternatives - each one sufficiently different from the others to be a right PITA to learn, program and maintain - are clear for all to see.