Server vendor has special help desk for lying, incompetent sysadmins

Welcome again to On-Call, our festive Friday frolic through readers' recollections of jobs gone bad. This week, something a little different from reader “DB” who says “I do server hardware warranty support for a known enterprise server vendor.” DB's been at it for 20 years and says he's spent his career working on PCs, …

  1. Anonymous Coward
    Anonymous Coward

    Well I've had my time with clueless techs too. I worked for HP ES for some time.

    We were administering servers all around the world and had to rely on onsite techs or Sun/Dell/HP foot soldiers for any actual repairs. Just some of the things:

    - One Sun engineer stole an interconnect cable since, as he told us, there were two of them

    - HP engineer unplugged the only good disk in a 2-disk mirror in order to check the serial number; in his defense he said that he was going to plug it back in really quick...

    - HP engineer unplugged the 2nd power cable on some system while removing the 1st power supply; he was resting his hand on the 2nd one

    - Dell engineer went on site and said he would need to take out the MB and get a new one from local storage; 2hrs later we noticed that he had taken the MB from the wrong system

    We had fun times too. I remember one engineer refused to go to a remote Canadian DC since the security guard was on sick leave and there were bears around.

    So my point is that it goes both ways; incompetence is not specific to sysadmins or system engineers.

    1. Slackness

      Errr.... Is that you Ross?

      *Joke*

      1. Captain Scarlet Silver badge

        Re: Errr.... Is that you Ross?

        Erm, hasn't everyone unplugged a secondary PSU that is on whilst taking out a failed PSU?

        In my defence, I had a PSU half out when the cable for the second fell out of the cable management. I went to put it back, but pulled the cable out, followed by the PSU dropping on my foot and lots of swearing.

        1. DubyaG

          Re: Errr.... Is that you Ross?

          I'm not one for proprietary connectors, but this looks like a job for a twist-lock style. If you pull that one out, you deserve what you get.

        2. Anonymous Coward
          Anonymous Coward

          Re: Errr.... Is that you Ross?

          A little while ago, my boss and another long-serving sysadmin were at our largest customer's office, looking at installing some new core routers.

          One of them leant on the rack, and somehow knocked out the power to the entire thing. Every single server and device in the rack went offline. Apparently it all went very quiet (and if you've done something similar, you'll remember that terrible silence, as if hundreds of fans had suddenly stopped).

          It took them the rest of the morning to clean up the mess. We've not let him live it down yet :)

          1. Stoneshop
            Flame

            Re: Errr.... Is that you Ross?

            Apparently it all went very quiet (and if you've done something similar, you'll remember that terrible silence, as if hundreds of fans had suddenly stopped).

            Even scarier: a low rumble in the computer room soundscape goes absent, and after a few seconds of puzzlement, trying to determine what exactly it might be that's causing the change, it hits you that it's the aircon. And there's close to 100kW of equipment still running, belching heat into the room.

            I think it's the only time I've seen a thermograph pen move.

    2. Anonymous Coward
      Anonymous Coward

      Mistakes sometimes happen, but some people are just dangerous. I have the bad habit of not letting anyone touch my machines without someone knowledgeable about the systems overseeing them.

  2. Anonymous Coward
    Anonymous Coward

    Developers....

    I was lumbered with supporting developers for a range of graphics cards years ago. Admittedly, very badly, as I'm not a programmer.

    Anyway, I used to get calls from them claiming that their boards were faulty because their programs didn't work as expected. My usual response was "Well, if the supplied demo programs/sample code works, then the hardware is fine."

    Never had any calls back; maybe I did a better job than I thought?

  3. Mr Dogshit
    WTF?

    "Microsoft Certified Support Engineer"

    I'm pretty sure that's not what MCSE stands for or has ever stood for.

    Secondly, I don't even understand the article. "I work in support and at some time in the past I had to deal with people I didn't like". Don't get it. At least give us a funny story about a user who thought the CD-ROM drive tray was a cupholder.

    1. Anonymous Coward
      Anonymous Coward

      Re: "Microsoft Certified Support Engineer"

      You are correct, it is "Must Consult Someone Else" ...

      1. Ben Boyle

        Re: "Microsoft Certified Support Engineer"

        Or indeed Must Consult Somebody Experienced

      2. This post has been deleted by its author

      3. Fatman
        Joke

        Re: "Microsoft Certified Support Engineer"

        <quote>You are correct, it is "Must Consult Someone Else" ...</quote>

        NO!!!

        NO!!!!

        NO!!!

        It is:

        Microsoft

        Certified

        Shutdown

        Engineer!!!!

      4. Anonymous Coward
        Anonymous Coward

        Re: "Microsoft Certified Support Engineer"

        No no, it's "Minesweeper Consultant and Solitaire Expert". I should know, I am one.

      5. FatGerman

        Re: "Microsoft Certified Support Engineer"

        “Memorization geeks who'd passed tests, and gave the appearance of being experts when in fact, they didn't know how to do anything.”

        Yep. Interviewed a few of those. Some answers to problem-solving questions I posed - where I was looking for a methodical approach to demonstrate an ability to think - included:

        "Which module was that in?"

        "I don't think we covered that."

        "I could probably answer that if I had the books to hand"

        The jobs went to the non-certified fresh science graduates who were (a) cheaper and (b) not thick.

    2. Knewbie

      After delivering some servers..

      the hardware delivery boy pressed the "Fire Suppression Gas Release, only use in case of emergency" switch while racing on the trolley with his friend, pushing off the wall for an extra speed boost.

      The DS8000 that was located just under one of the hoses suddenly lost 80%+ of its disks...

      Also, we got it on the security camera...

      1. Anonymous Coward
        Anonymous Coward

        Re: After delivering some servers..

        Two of my ex-colleagues were once checking a rack that had been returned after surviving a fire extinguished with powder. One thought it was a good idea to turn on one of the Dell blade enclosures (with fans powerful enough to make a 747 envious) while the other was behind it....

    3. Peter Stone
      Happy

      Re: "Microsoft Certified Support Engineer"

      No, MCSE actually stands for:

      Minesweeper Champion & Solitaire Expert

  4. Doctor_Wibble
    Devil

    Wrong sized hammer

    People just think you hit it with a hammer, but there is an art to assessing which hammer to use and exactly how much excessive force to apply. And which sacred chant to speak, how much blood is required etc.

    Or you can have a hardware problem that you thought was a software problem because you didn't get a blue screen or a kernel panic and that checksum error when uncompressing the massive file just delivered is only a patch away or you should have used supersuperultrazip instead of ultrasupersuperzip which is just for amateurs. And until it was figured out the poor sod was feeling like the only person on the entire planet who was incapable of unzipping a file. Beer therapy fixed that though.

    1. Anonymous Coward
      Anonymous Coward

      Re: Wrong sized hammer

      People just think you hit it with a hammer, but there is an art to assessing which hammer to use and exactly how much excessive force to apply

      After switching an important server back on after a scheduled power-off, its main data disc refused to spin up. The head of IT came into the server room to see me holding a hammer next to the hard drive which was balanced in my hand. He asked if I really was going to hit the HD with a hammer. "Yes" I replied as his face started turning white. A light tap on the side of the HD was enough to loosen the bearings and allow the disc to spin up.

      1. TitterYeNot

        Re: Wrong sized hammer

        "A light tap on the side of the HD was enough to loosen the bearings and allow the disc to spin up."

        Let me guess - a 90's half-height Seagate Barracuda, probably 4 GB. When they were failing, those things nearly always seized up after being powered off. Out of curiosity I took a broken one apart (it was out of warranty, so it was just going in the bin) and the thing had 11 platters! No wonder the poor motor sounded like it was crying when trying to spin that lot up to speed...

        1. Stoneshop

          Re: Wrong sized hammer

          My preferred method is to give the recalcitrant drive an abrupt twist about its platter axis, and only if that doesn't work get out the Manchester screwdriver.

        2. Phil O'Sophical Silver badge

          Re: Wrong sized hammer

          Let me guess - a 90's half height Seagate Barracuda, probably 4 GB

          I'd have guessed a Quantum 105MB as fitted to early SPARCstations. If powering one of them up didn't work, the trick was to lift the front of the pizza box ~ 2" and let it drop back onto the desk while powered. Last resort was 4"; after that it was time to replace the disk.

      2. Doctor_Wibble
        Thumb Up

        Re: Wrong sized hammer

        > A light tap on the side of the HD was enough to loosen the bearings and allow the disc to spin up.

        I can also recommend a pair of (normal-sized) pliers held by the nose: just the right weight, and the handle rubber gives just enough to deliver the right amount of tap without having to be too fussy about over-enthusiasm and/or frustration. It's also rather more discreet than a hammer when in an environment where non-technical management is more likely to fire a person than order a replacement HD.

        In a box somewhere I have one labelled 'clout to start' and another marked 'manual twirl'; the latter is a fun thing to try in a cramped case with that barely-reaching IDE cable, without dislodging anything else or shorting it when you drop the only accessible screw that's keeping the thing from falling on the fan.

        The person who invented that mousetrap game definitely had a hand in some of the more 'innovative' case designs that are out there.

        1. Anonymous Coward
          Anonymous Coward

          Re: Wrong sized hammer

          And if you don't have those precision tools on hand: we had the "7 Inch Rule".

          You could drop the hard drive from progressively higher distances until you hit the maximum.

      3. abedarts

        Re: Wrong sized hammer

        Waaay back in the day, our massive 4Mb (I think) full 19" rack sized HD was so heavy it used to take two of us to get it in and out of its rack. It would have needed a light tap from a sledge hammer to have had any effect!

    2. PickledAardvark

      Re: Wrong sized hammer

      I diagnosed a POST failure where the "flywheel" on a 5.25" floppy drive was snagging on the drive tray. I couldn't have fixed that without a hammer-like tool.

    3. Ben Boyle

      Re: Wrong sized hammer

      AKA "Percussive Maintenance"

    4. Fatman

      Re: Wrong sized hammer

      <quote>People just think you hit it with a hammer, but there is an art to assessing which hammer to use and exactly how much excessive force to apply. And which sacred chant to speak, ...</quote>

      I had an old Micropolis full-sized 5-1/4 inch 145Mb unit seize. The customer's most recent backup was more than 2 months old, and there would have been a shit load of work lost if I could not recover that data. I told the customer that anything done since the last backup would be gone if I didn't manage to get the drive functional, and they MAY get lucky if I could unfreeze it. BUT NO GUARANTEES. I even got that in writing to protect my ass!!!!

      To replace that oversized beast, I had procured a replacement hard drive (a Quantum Fireball 540), formatted it, and loaded the O/S and the backup onto it. I got it to boot. At this point, the customer could be back up, albeit having lost more than two months' worth of work. I powered the box down, completely unplugged the drive's power and data cables, and removed it from the case. I did not want any "accidents" fucking up the hard work I had just completed.

      I borrowed a small 4 oz brass hammer and deftly whacked it on its long side close to the motor spindle (i.e. force applied perpendicular to the motor shaft). I then connected the power and data cables, and fired up the box. The damn thing spun up. The customer, trying to get out of paying for the repairs, exclaimed: "You fixed it!!! We don't need the other drive."

      I reminded him that this was only a temporary fix, in an attempt to recover the lost work, and that it was highly likely this drive would fail again. I jammed a 150Mb QIC into the tape drive and made a backup; then I made another one, and finally a third.

      I re-installed the replacement, booted it up, and restored the backup over the old files. I let the customer check the data, and all of what they expected was there. Whew!

      The customer insisted on me reconnecting the failed hard drive because he didn't believe that the 10 year old drive had failed. To humor him, I did so. I powered it back up, and the motherfucker started to smoke. God damn was I lucky.

      He had stalled on paying the bill, so the company ended up suing him, and won. He refused to pay, and I contacted one of their office staff, and was able to find out where they had their bank account. A garnishment order took care of collecting that judgment. That shithead got blacklisted by us, and his company name made the rounds. I wonder who was unfortunate enough to ever make the mistake of doing IT work for him in the future.

  5. Anonymous Coward
    Anonymous Coward

    Not just IT

    I was on a tour of a call center, and the manager was explaining the different teams in the call center. He mentioned that they had a team just to deal with difficult or abusive customers. (These poor souls had extra training to deal with the abuse they received from clients.) He said clients who were flagged for special treatment would get transferred automatically to this special team based on their CLI (calling line identification).

  6. TRT Silver badge

    I've just had a document delivered...

    from our Problem Management manager, who says it's going into the induction pack for new staff. It includes the line "Problem Management is a process within the IT Infrastructure Library (ITIL) Service Operation lifecycle stage."

    Which undoubtedly wins the prize for the biggest pile of wank used to mean "This is something we do" ever.

    1. TRT Silver badge

      Re: I've just had a document delivered...

      It seems our Problem Management manager also reads El Reg.

      Defining a problem as being the "underlying cause of an incident" (1) isn't helpful and (2) in the event of the incident being "I can't get my iPhone to connect to Eduroam" defines the End User as the problem.

    2. TRT Silver badge

      Re: I've just had a document delivered...

      Apparently ITIL is a thing. Like an Investors In People or a PRINCE project management thing. Never heard of it. Not good to assume people will know what you're on about. Anyway, the long and short of it is that somebody is selling "common sense" again.

  7. Anonymous Coward
    Anonymous Coward

    So DB has spent 20 years supporting useless and incompetent SysAdmins...

    DB, can you tell us what you would do for a living without these people?

    1. Anonymous Coward
      Anonymous Coward

      Playing with words when I should be working.

      A "somewhat useless and semi-incompetent SysAdmin" :-)

      See how that worked.

      AC 'cause I am.

  8. Anonymous Coward
    Anonymous Coward

    I've worked in various roles in IT and I can tell you that it is your employer's (the IT vendor's) fault:

    1) Sales Reps tell customers that their system is so easy to operate that trained monkeys can do it.

    -- > The Customer hires monkeys and pays them peanuts

    2) The IT Vendor's Channel Sales organisation intentionally turns a blind eye on Partner Training and Certification, because there's a really big deal on the table they want to win.

    -- > The partner stuffs up the install and has no clue how to fix the issue.

    -- > Vendor SE is ass covering, referring to the specifications the Partner provided

    -- > The Vendor gets dragged into the mess and the problem ends up with support

    3) Vendor Support is initially unable to exclude a software bug or hardware fault

    -- > Vendor Support ends up validating the customer's solution architecture

    -- > Finally somebody needs to stick his/her head out, informing the customer it's neither SW nor HW

    -- > Vendor Support Management gets upset and now also hires monkeys to match the skill level of customer and Partner

    4) Weeks later the issue is still not resolved

    -- > After a fingerpointing exercise the vendor finally drags in their Professional Services

    -- > PS needs to be paid, funny money is moved around and Marketing funds will be diverted to pay for flights and hotel

    -- > PS translates technical matters into blame which gets divided proportionally between vendor, partner and customer. (or whoever is stupid enough to hang themselves)

    5) Sales and Partner Sales take customer Management out to dinner

    -- > Customer gets pampered with a nice meal and drinks, vendor pays

    -- > Sales and Partner Sales are celebrating their success and compare wrist watches

    -- > vendor Support and Idiot Sys Admin stay up all night working to resolve the issue

    -- > Management want regular updates

    1. Fatman
      Joke

      RE: Support Checklist

      <quote>

      5) Sales and Partner Sales take customer Management out to dinner

      -- > Customer gets pampered with a nice meal and drinks, vendor pays

      -- > Sales and Partner Sales are celebrating their success and compare wrist watches

      -- > vendor Support and Idiot Sys Admin stay up all night working to resolve the issue

      -- > Management want regular updates

      </quote>

      You forgot:

      -- > Vendor Sales and Partner Sales reps reap a huge bonus for 'saving the company's ass'.

      -- > Vendor Support and Idiot Sys Admin get their asses reamed for """incompetence""".

      1. We Haven't Met But You're A Great Fan Of Mine

        Re: RE: Support Checklist

        Finally ...

        -- > The issue gets resolved by vendor support

        -- > an email gets sent around to thank everybody involved in resolving the problem

        -- > the cockroaches come out from underneath the rocks - because they all want to be the messenger of good news and inform the customer that the issue has finally been resolved

        1. John Brown (no body) Silver badge

          Re: RE: Support Checklist

          "-- > an email gets send around to thank everybody involved in resolving the problem"

          ...and conveniently misses out the frontline "soldiers" who actually did the work. Bonuses all around for everyone who doesn't actually do anything.

  9. Anonymous Coward
    Anonymous Coward

    Way back in the day, I was working mainframe ops for a large global corp.

    After a newly changed program failed on system start for the day, the on-call manager advised that all would be okay. 4 hours later he called to ask what his advice had been; after he realised his mistake, much hilarity ensued as the whole system had to be restored and taken offline for the rest of the day.

  10. Sgt_Oddball
    Devil

    In the defence...

    Of clueless sysAdmins - some of us hunger for sysAdmin jobs, others have them thrust upon them.

    On a related note, it also really helps if there's actually some worthwhile documentation on how to configure and run said hardware, which requires configuration files being spoken in arcane runes - backwards... through an RS-232 cable via a half-functioning laptop (kept around just in case of such things) whilst offering small blood sacrifices to your deity of choice...

    There's a reason I never touch networking gear........

  11. Trident

    Back in the late 70s I was a 1st-year undergrad. at a UK uni.

    An RK05 drive (which held a removable disk platter) holding the final year projects crashed, and the DEC engineer was called to repair it. No one was worried; there were two backups.

    The first thing the engineer did was to take one of the backups and put it in the drive, to see if the problem was the disk or the drive. The head in the drive had crashed, so of course it destroyed the first of the backups.

    The next thing the engineer did was to properly repair the drive and put the blank disk he'd brought with him in it to verify all was well. It was.

    And then, as we only had the one backup remaining, he decided to copy the backup to the blank disk so we'd have two backups.

    Unfortunately he didn't write-protect the source disk, fat-fingered the copy command, and copied the blank disk over the last remaining backup.

    I still have the original platter which I retrieved from the rubbish bin. The head crash had gone through the oxide right down to the base metal. I keep the platter on my desk to constantly remind me of the importance of backing up correctly....
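
    For anyone who wants the moral in executable form, here's a minimal sketch (Python, with hypothetical image-file names standing in for the disk packs) of a copy routine that opens its source read-only and refuses a non-blank destination. An illustration of the principle only, not of how you'd actually drive an RK05:

        def copy_image(src_path, dst_path, block_size=1 << 20):
            """Copy src over dst, guarding against the classic fat-finger:
            overwriting the last good backup with a blank disk."""
            # Open the source strictly read-only -- the software equivalent
            # of flipping the write-protect tab before starting the copy.
            with open(src_path, "rb") as src, open(dst_path, "r+b") as dst:
                # Heuristic sanity check: a freshly formatted "blank" target
                # shouldn't already look like it holds data.
                if any(dst.read(block_size)):
                    raise RuntimeError(dst_path + " is not blank; refusing to overwrite")
                dst.seek(0)
                while True:
                    chunk = src.read(block_size)
                    if not chunk:
                        break
                    dst.write(chunk)

        # On the day, the arguments were effectively swapped (file names
        # hypothetical): copy_image("blank.img", "last_backup.img") instead
        # of copy_image("last_backup.img", "blank.img"). With nothing
        # write-protecting the source, the copy ran just fine; the
        # blank-destination check above would have made the swap fail loudly.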

    1. Anonymous Coward
      Anonymous Coward

      The mistake was allowing the DEC engineer access to the backups, and letting him use one to experiment...

      1. Phil O'Sophical Silver badge

        Always mount a scratch monkey: http://edp.org/monkey.htm

    2. Wensleydale Cheese
      Happy

      Not easy to ignore then

      "I keep the platter on my desk to constantly remind me of the importance of backing up correctly...."

      RK05 disks had a diameter of 14 inches (PDF)

      1. John Brown (no body) Silver badge

        Re: Not easy to ignore then

        "RK05 disks had a diameter of 14 inches (PDF)"

        So, they make a nice cake stand? Or a Lazy Susan? :-)

    3. John Tserkezis

      "I still have the original platter which I retrieved from the rubbish bin. The head crash had gone through the oxide right down to the base metal"

      I've seen one that went through the magnetic layer, and all the way through the aluminium base in a section (yep, you could see right through). Somehow, it was still balanced enough to be spinning.

      What stumped us, was not that the drive failed in this manner, but that it was in this failed state for long enough to gouge the platter without anyone noticing.

  12. Mephistro
    Happy

    In the middle of the noughties...

    ... I had a call from a company that had contracted one of these "external support services" for their smallish - ~25 seats - shop. After suffering some issues with their file servers, they called "support" and spent several hours on the phone, to no avail. After several days of tests and checks, the customer called me to be their "interface" with the support service, as the guy in charge of that was more of a power user than a professional IT guy, and paid accordingly (peanuts, monkeys, management cutting corners, ... you know the drill).

    I went there, studied the issue for a while and told the customer I could fix it myself, but the customer insisted I call the support service, as that would supposedly save time.

    To put it briefly, their setup included two servers, one of them a Windows 2003 server and the other a Debian file server. The support guy wasn't even aware of the existence of the Debian server (or the existence of the Debian OS :-). To add insult to injury, all the bushwhacking with the Windows server had caused more issues. At that point, I told the support guy that I needed to leave the premises for several hours and would contact him later. I was lying through my teeth :-).

    I went straight to the management and explained the issue to them, i.e. that the "support service" they had hired was basically a bunch of noobs with scant knowledge of Windows servers and no knowledge at all of anything else IT related. My Unix/Linux was a little bit rusty back then, so I had to google the fixes for the Debian server. It took me another three hours to fix everything.

    After being profusely patted on the back by the customer, I advised them to drop the contract with the "remote support". To this day, I still use this as a cautionary tale against those "remote support service" scams.

    More recently, I had another funny experience where the support service was trying to use TeamViewer to fix a problem with a RAID array. I had to explain to the poor sod on the phone that you can't access the system BIOS using TeamViewer. :-D

    1. A Non e-mouse Silver badge

      Re: In the middle of the noughties...

      To be fair, the small businesses that use these support companies do so because they have no IT skills themselves and can't judge if the companies they're using are sound.

      1. Mephistro

        Re: In the middle of the noughties...

        That's true, but one would expect the managers or owners to perform some due diligence, find some geek consultant with good references and ask (and pay) for his/her advice. If you are unsatisfied with said advice, you can ask for a second opinion. To a different guy, of course! ;-)

        Alas, even doing this would probably give them a 50-50 chance, as lots of the people who claim to be geek wizzes shouldn't be left alone with any piece of machinery more complex than a baby rattle. ;-)

  13. Anonymous Coward
    Anonymous Coward

    Clueless consultant

    A few years ago a company I worked at engaged a consultant to carry out a health check on a UNIX platform. The platform had virtual instances which had LAN and SAN provisioned via redundant server instances on each box. The vendor consultant ran a script to health-check each of the IO instances, and noted that as he ran the check on each server the SSH session disconnected. He carried on running the script on all instances, then said these would run for a while, and left for the night.

    The reason the SSH session disconnected was a bug in the script that crashed the instances. This cut all the IO to the running guests in the platform. The guests suddenly lost all their LAN and storage and the net result was every production system went down with prejudice. The calls from customers were not long in starting and the escalation was immediate.

    The situation was made worse by the fact that the lead sysadmin was away on holiday, and a call was made to see if he could come back in to help sort the mess out. He did turn up and sorted out the mess. What was less clear was why the vendor consultant kept running these scripts when the first attempt caused his SSH session to terminate; normal people would think that was a little odd.

    1. PickledAardvark

      Re: Clueless consultant

      My colleague installed a software suite which I will not name. It is a complicated beast -- a SQL server, lots of file servers with load balancing -- and the documentation is lousy. However there is a "state of health" app which we used to test the configuration.

      First run, after checking that everything was OK: FAIL. So we contacted the local support provider, who was unable to identify a root problem (after weeks) but escalated us to the manufacturer support team -- which was outsourced. The outsourcers were out of their depth and quickly flipped us over to the real manufacturer support team. Who could not find a problem.

      Eventually somebody found the author of the "state of health" app who determined "Oops, that's a bug". Sorry folks for wasting a couple of months, but your implementation is fine and I should have got it right.

      I don't know how to fix the disconnect between having a real problem and finding the person to fix it. Having worked in support roles, I know the many interpretations of problem.

      I do know that if IT providers carried the cost of IT failure, real problems would be fixed more promptly.

    2. Anonymous Coward
      Anonymous Coward

      Re: Clueless consultant

      If the company you worked for had their shit together they wouldn't need an external consultant for health checks.

      Sometimes companies do these health checks as a feel good exercise, most often after a bad experience (outage etc)...

      However, a health check is no substitute for patch, performance and capacity management.... That's an ongoing effort ...

      The good thing about the cloud is that providers will at least do the performance and capacity management for you and your detailed bill will show you exactly the resources you are consuming.

  14. KroSha

    Morning off for outsourcing

    I used to work for a major UK "Service Provider". One day I get a call to a data centre down in Wapping. The contract requires response within 4 hours, so I get down there sharpish.

    Seems that a new guy had started on the Security team. He beeped into the data room with his pass and checked whatever he was supposed to be checking. On the way out, he didn't realise that he also needed to beep out, and pressed the big button next to the door; the Big Red Button.

    Taking the whole room down fried multiple servers, and there were a good half-dozen guys in the room, all trying to get clients' kit back up.

    1. Stoneshop
      Facepalm

      Re: Morning off for outsourcing

      On the way out, he didn't realise that he also needed to beep out, and pressed the big button next to the door; the Big Red Button.

      Someone who shouldn't have been in the computer room in the first place managed to hit the Big Red Button high up on the wall, two meters away from the door, instead of the Average Size Blue Button next to the door, trying to get out.

      And there was the install team for the no-break setup. After doing a test run on the freshly-installed diesel before adding the generator and other stuff, they found that the "Engine Stop" button wasn't functional, and hit another promising-looking button nearby. Of course it was Big, and Red, and this one was hooked up.

  15. Jim-234

    Our company sold a fairly beefy server to a "customer" via eBay.

    A little over a month later, the "customer" was complaining to eBay that the server was defective and not working right. Of course they refused our requests to call them up on the phone and troubleshoot the problem (and of course refused to provide logs).

    They claimed the server kept shutting down.

    So of course, given eBay's habit of not usually caring about the seller's side of the story, they demanded that we give them a refund and pay for the return shipping, so we were then stuck with quite a bit of cost shipping said large, heavy box both ways.

    Get it in & look at the logs, and every hour there is an entry saying the OS requested shutdown.

    Boot it up to the OS, remove their password so we can get in.

    Discover they had an "evaluation" edition of Microsoft software installed whose "evaluation" period had expired, and so every hour it was telling the system to shut down.

    We sent them a strongly worded e-mail that if they can't "evaluate" products properly, don't blame the hardware.
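
    The "every hour" regularity was the tell: hardware faults shut things down at random, a licensing timer is metronomic. A minimal sketch of that check (Python, assuming you've already pulled the shutdown timestamps out of the event log):

        from datetime import datetime, timedelta

        def looks_like_timed_shutdowns(timestamps,
                                       period=timedelta(hours=1),
                                       tolerance=timedelta(minutes=2)):
            """True if successive shutdown events recur on a fixed timer,
            which points at software (e.g. an expired evaluation licence)
            rather than flaky hardware."""
            times = sorted(timestamps)
            gaps = [b - a for a, b in zip(times, times[1:])]
            return bool(gaps) and all(abs(g - period) <= tolerance for g in gaps)

        # Entries at 09:00, 10:01 and 11:00 -> True: an hourly timer.
        events = [datetime(2016, 1, 8, 9, 0),
                  datetime(2016, 1, 8, 10, 1),
                  datetime(2016, 1, 8, 11, 0)]
        print(looks_like_timed_shutdowns(events))  # True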

  16. Anonymous Coward
    Anonymous Coward

    Supplying logs...

    It really pisses me off that I have to supply logs to some tool in India every time BEFORE vendors will actually ship a replacement disk. There's a disk. It's failed. What the hell do you think the logs will tell you? And if there's a flaky disk, I don't want to mess with it on a production mission critical system trying to get it to work, I want a new known good one - quit wasting time and get me a replacement disk within 4 hours of my original call! If I don't send the logs before the end of the day, then fine, bill me for the new disk...

    1. Anonymous Coward
      Anonymous Coward

      Re: Supplying logs...

      In one place I was at, we had to take turns on disk swapping; we had a few thousand disks, and the result of a disk-failure swap request was often: "Sorry sir, we can't send you a disk; policy is only to have 4 disks in transit, and sir, you seem to have, um, errr... 39?" "Yes, last week was a bad week. When are you coming to collect them? We need 7 more today." "Sir, we can't send you a disk until this has been escalated." "Escalated to who?" Argh, argh. Years later, still so angry from those calls.

  17. raving angry loony

    MCSE hatred

    A local employer of the "enterprise" class only hires Microsoft-"knowledgeable" people who can answer questions such as "what is the delay of a certain command when issued, in microseconds". And other such memorization exercises.

    Oddly, this "employer" has seen not just hours but DAYS of downtime in a mission critical (health) environment, regularly has people on site who don't have a clue what they're trying to fix, resulting in even more local downtime, and is pushing a computer based health records system that is quite literally killing people as it changes prescriptions because they don't match what's in their database. It's generally considered to be the most incompetent I.T. group in the region. Which is unfortunate since they run the hospital I.T. systems. Oh, but they're ALL "MCSE", so that's OK.

  18. Anonymous Coward
    Anonymous Coward

    MCSEs and Unix Admins are equally bad...

    It's 2016 and we still see those stupid arguments against MCSEs that may have been funny 20 years ago....

    Reality is that employers don't shower you in money anymore just because you can bang out Unix commands that extend over three lines and a bit of scripting and piping.

    People who have seen the mainframe era, when processes, skills and staff were mature, know that Unix and Windows are just two sides of the same coin.

    Those open systems (Unix and Windows) offered a cheaper, but often flaky alternative to robust systems and processes.

    These days people call themselves "senior systems engineer" after having spent two years fixing printer paper jams, and now think they're hot shit because they managed to install Linux on their desktop.

    No organisation wants to rely on those people to run their critical processes. Whatever can be procured as a service will go to the cloud. Proprietary business critical systems will be run in house and supported by people that know what they're doing.

    Mission Critical stuff (e.g. banking) still runs on mainframes and maybe Unix for the Web front-end...

    1. Truckle The Uncivil

      Re: MCSEs and Unix Admins are equally bad... ...as mainframe engineers

      Sir, you display your ignorance. Being a veteran of the times you describe, I suggest that you could not be more wrong. I remember your portly mature staff and how little they contributed. It was still usually some underpaid geek who found and fixed the problems (and got no credit, of course).

      There were many, many mainframes that could not be used without Unix since Unix ran their IO controllers (in separate boxes). It was partly that that caused the great increase in mainframe throughput in the late sixties and early seventies. If Unix was flakey then the mainframes would have been thereby flakey.

      I agree that people in the industry are not knowledgeable enough but that has ever been the case. Nothing changed there. Nothing will unless management distinguishes between technicality and personality and which they need.

      Just as a thought, what operating systems are used by supercomputers these days? Or bitcoin miners? Driver assist?

      You seem averse to the swift and agile. Perhaps another dinosaur complaining about all the mammals running around and eating their eggs.

      I cannot see why people want to go back to "batch" lives when they can get close to real time. But I don't have to see that. At least not until I am convinced they exist. Go buy a smartphone...

      1. Anonymous Coward
        Anonymous Coward

        Re: MCSEs and Unix Admins are equally bad... ...as mainframe engineers

        I'm not ignorant of progress at all, and mainframes have not been perfect either. However, going back to the context of this discussion - the decline in support quality - I think you are missing the important points.

        The decline of support quality or skills has little to do with MCSEs, Unix Admins or untrained users.

        The commoditisation of x86 also introduced a shift in support and service delivery quality. I have seen this shift over 20 years.

        In earlier times there were a lot more brains in the field (onsite engineers). Increasingly there was a shift whereby the field engineers became dumb drones, remote-controlled from the call centre.

        In other words - there is no support experience anymore because support is now transaction based.

        In its extreme form there is a split between customer empathy and actual technical skill. A good example is the call handlers who apologise over and over - until, hours later, you finally get somebody on the phone who is a subject matter expert.

        In addition, most IT vendors advertise how well they Integrate and Partner with other Technology vendors. There are roadshows and Partner events where customers get lured in - but once you log a support ticket, the first thing the engineer will try to do is eliminate his employer's kit as the source of the problem.

        Sure, you could buy that Premium, Platinum Star Collaborative Support Contract - but that's about as useful as aftermarket car seat stain protector.

        I like x86 and the effort people put into Open Source Software; however, Support Quality has declined.

      2. grs1961

        Re: MCSEs and Unix Admins are equally bad... ...as mainframe engineers

        > ... could not be used without Unix since Unix ran their IO controllers (in separate boxes). It was partly that that caused the great increase in mainframe throughput in the late sixties and early seventies

        Really? In the sixties? And the seventies? UNIX systems running IO controllers on/for mainframes???

        I'd like to know who has/had the flux capacitors to deliver them!

        ISTR in the 90s IBM putting RS6000s in front of mainframes to manage terminals on a line-by-line rather than screen-at-a-time basis, and support stuff like FTP, but they still had 3270s to do the real work. :-)
