
November 10, 2008

Any Technical Trainers out there?

Filed under: Careers, Certification, IT Industry, Training — admin @ 12:36 pm

Over the past six months a large portion of my job function has been to develop courseware and then use that material to teach, for both internal employees and customers.  (This was the primary reason I was in Japan in May/June).

While I think I’ve been doing pretty well at it, I’d love to be able to further polish my skills in the area.  To that end, I tried doing some research on the Web on this topic.  Unfortunately, I didn’t find all that much of use.

I’d like to hear from readers about resources available to the technical trainer, both on teaching techniques and on courseware development.  Keep in mind that the material I am teaching is designed for highly skilled sysadmins and SAN and network admins, so information relating to this kind of high-end topic would be especially beneficial.  Information on professional groups or organizations that cater to the field would also be appreciated.

March 27, 2008

Dell Signs OEM Agreement with Egenera, Inc.

Filed under: IT Industry, Virtualization — admin @ 11:16 am

There was a flurry of activity earlier this week by bloggers and reporters as Dell announced it was partnering with Egenera, Inc. to OEM Egenera’s PAN Manager virtualization management software on its PowerEdge servers.

The original press release from Dell can be read here.

Egenera, Inc. is the leader in a market segment that IDC has begun calling "Virtualization 2.0", or as Egenera defines it, "Data Center Virtualization."  Until now, Egenera’s PAN Manager management software was only available on its high-end BladeFrame hardware platform.  With the Dell announcement, Egenera has begun to expand its PAN Manager software framework to other platforms.

Industry reaction to this announcement has been overwhelmingly positive:
Virtualization Journal announces that Dell Steals Virtualization March on HP & IBM, while Forrester Research blogs that the Dell-Egenera Partnership Shores Up Both Companies in Virtualization Market.  Ideas International also gushes over the partnership and asks whether it might extend to other hardware vendors.

The aforementioned Ideas International story essentially sums up the value of the Dell-Egenera partnership with the following statement: "PAN Manager allows IT managers to create an entire virtual datacenter where nothing is tied to physical hardware. Compute, storage, and network resources can be dynamically allocated when needed and where needed…With PAN Manager, Dell leaps over many of its competitors with the ability to create the virtualized datacenter of the future today using inexpensive industry-standard components."

March 13, 2008

Why Have More Companies Not Embraced 64-bit?

Filed under: IT Industry, Linux — admin @ 3:47 pm

It seems to me that 64-bit computing is the wave of the future and takes very little effort to adopt, yet from where I stand, not as many companies are going the 64-bit route as I would have thought.

Nearly every x86 processor sold today, if not every single one, implements the AMD64 or EM64T (IA-32e) architecture.  Memory prices continue to drop, making it more and more common to see x86 (or, technically, x86_64) based systems with 32GB, 64GB, or even 96GB of RAM.
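
As a quick aside, if you are not sure whether a given box even has 64-bit capable CPUs, the "lm" (long mode) flag in /proc/cpuinfo is the tell-tale.  A minimal Python sketch (assuming a Linux /proc filesystem) that checks for it:

    # Quick check for 64-bit (AMD64/EM64T) capability on a Linux box.
    # The "lm" (long mode) flag in /proc/cpuinfo marks a 64-bit capable CPU.
    def cpu_is_64bit(cpuinfo_path="/proc/cpuinfo"):
        with open(cpuinfo_path) as f:
            for line in f:
                if line.startswith("flags"):
                    return "lm" in line.split(":", 1)[1].split()
        return False

    if __name__ == "__main__":
        print("64-bit capable CPU" if cpu_is_64bit() else "32-bit only CPU")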

Through Physical Address Extension (PAE), 32-bit processors have long been able to address more than 4GB of physical memory, given proper OS support.  The addition of just 4 bits of physical address space allows a 32-bit processor to support up to 64GB of RAM.
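
The arithmetic is easy to check for yourself:

    # Physical address space arithmetic behind the PAE numbers above.
    GB = 2 ** 30
    print("32-bit physical addressing: %d GB" % (2 ** 32 // GB))   # 4 GB
    print("36-bit (PAE) addressing:    %d GB" % (2 ** 36 // GB))   # 64 GB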

In Red Hat Enterprise Linux (hereafter referred to as RHEL), support was added for up to 64GB of memory in RHEL4 via the hugemem kernel.  The RHEL4 Release Notes state that the hugemem kernel provides a 4GB per-process address space and a 4GB kernel space.  It is also noted, though, that running the hugemem kernel will have a performance impact as the kernel needs to move from one address lookup table to another when switching from kernel to user space and vice-versa.  It is not stated in the release notes, but I have heard conjecture that the performance impact could be up to 30%.
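
If you are not sure whether a given RHEL4 box is actually booted into the hugemem variant, the kernel release string gives it away.  Here is a rough Python sketch, assuming the usual RHEL convention of appending the variant name to the release (e.g. 2.6.9-42.ELhugemem):

    # Rough check for which kernel variant is booted.  Assumes the RHEL
    # convention of appending the variant (smp, hugemem, largesmp) to the
    # kernel release string, e.g. "2.6.9-42.ELhugemem".
    import platform

    release = platform.release()
    if release.endswith("hugemem"):
        print("Running the hugemem (4GB/4GB split) kernel: %s" % release)
    else:
        print("Not a hugemem kernel: %s" % release)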

RHEL5, the latest major release of Red Hat Enterprise Linux, actually removes support for the hugemem kernel.  32-bit RHEL5 will support at most 16GB of memory.  See the RHEL Comparison Chart for details.  I cannot find specific references as to why hugemem was removed in RHEL5, but I have heard that the performance impact of hugemem was a hassle to deal with from a support perspective.  (Not to mention the assertion that 32-bit is dead!)

So, if a user is running 32-bit RHEL4 with more than 16GB of memory, the upgrade path to 32-bit RHEL5 is blocked by that 16GB maximum.  A fresh 64-bit install of RHEL5 is required to take advantage of the additional memory.

64-bit, on the other hand, can address a full 2TB of memory.  There is also no longer a distinction between LOWMEM and HIGHMEM for the kernel.  The elimination of the 1GB/3GB split (or the 4GB/4GB translation with hugemem) increases the stability of the kernel under memory-intensive loads.  I’ve seen many cases where 32-bit systems with 8, 12, or even 16GB of RAM have fallen over because the kernel could no longer assemble contiguous blocks of memory fast enough from its limited 1GB address space, while HIGHMEM sat with many GB of usable memory pages.
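
When I suspect this kind of LOWMEM starvation on a 32-bit box, /proc/meminfo tells the story: compare LowFree against HighFree.  A rough sketch of that check follows; note that the Low*/High* fields only appear on 32-bit kernels built with HIGHMEM support, and are simply absent on a 64-bit kernel:

    # Compare free LOWMEM against free HIGHMEM using /proc/meminfo.
    def meminfo(path="/proc/meminfo"):
        info = {}
        with open(path) as f:
            for line in f:
                key, _, rest = line.partition(":")
                info[key.strip()] = int(rest.split()[0])  # values are in kB
        return info

    m = meminfo()
    if "LowFree" in m:
        print("LOWMEM free:  %5d MB of %5d MB" % (m["LowFree"] // 1024, m["LowTotal"] // 1024))
        print("HIGHMEM free: %5d MB of %5d MB" % (m["HighFree"] // 1024, m["HighTotal"] // 1024))
    else:
        print("No LOWMEM/HIGHMEM split in /proc/meminfo -- likely a 64-bit kernel.")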

In cases like this, a migration to 64-bit has nearly always resolved the kernel memory starvation.  (In a couple of cases there was a runaway app that consumed every available page of memory on the system, so it made no difference whether the system was 32- or 64-bit.)

The moral of the story?  Go 64-bit with any new server implementations.  Begin putting plans into place now to migrate legacy 32-bit systems to 64-bit in the near future.  Having a solid, actionable plan will go a long way to ensure a smooth transition.

March 11, 2008

Is There Really an IT Labor Shortage?

Filed under: Careers, Certification, IT Industry — admin @ 1:51 pm

Slashdot recently linked to an article titled Is There Really an IT Labor Shortage? which argues that the alleged IT labor shortage is merely an argument of convenience for various companies.  The article also questions whether the skills shortage is really just that or a result of unrealistic hiring practices.

While I cannot disagree with some of the reasoning behind the argument of hiring practices, I’d like to propose another angle on the ‘labor shortage’ theory–too much ‘book knowledge’ and not enough critical thinking among IT job candidates.

My company has recently been looking for qualified support personnel.  The typical support engineer we hire has 7 to 10 years of industry experience in UNIX/Linux, Windows, Networking, SAN, or combinations thereof.  These are high-level positions for high-level people.  Granted, the 'support' portion tends to scare away a good number of people from the outset.

I tend to be involved in either the first-level (phone screen) or second-level (first face-to-face) interviews.  Not only do most of the candidates lack strong enough technical skills, but even those who do have them, or appear to according to their resumes, are missing one critical skill: troubleshooting.  Troubleshooting, or more simply put, thinking for oneself, is a very basic and critical skill that seems to be lacking in the majority of the candidates I speak with.

I don’t expect every candidate to know the answer to every technical question that I ask.  However, I do expect that they will be able to admit when they don’t know, tell me how they would go about finding the right answer, and break an unknown problem down to a more basic level by asking the right questions.  Problem definition is a critical part of troubleshooting and something that our organization tries to instill in our engineers.

For example, with one recent batch of phone screens, I provided a simple scenario, warning beforehand that there was no single correct answer.  The question was as follows:

“If you received a [call | email | trouble ticket] from a customer who reports that users of his Oracle database on Linux are reporting slowness, how would you go about troubleshooting this issue?”

Not a single candidate was able to provide a set of problem definition steps to my satisfaction.  Most attempted to solve the problem right off the bat by spouting off possible solutions to an unknown problem.  ("I’d check the server’s disk space!"  "I’d look at subnet masks!"  And so on.)  I was hoping to hear logical questions to try to narrow down the problem, such as: Is the problem new?  Does it only occur at certain times of the day?  Are all users affected?  Are all queries affected?  Were any changes made recently?  How do you define slowness, and how are you measuring it?

Where does this lack of critical thinking come from?  I’m not entirely sure, but I think part of the blame can be laid at the feet of the myriad industry certification programs.  The bulk of the certification programs whose names get tossed around as resume keywords only push people to know what is on the test.  No attention is given to troubleshooting, to trying to solve problems in a methodical fashion.  Not all certifications are like this: the CCIE and RHCE are two that I can think of off the top of my head that are lab-based rather than 'multiple-guess'.

I don’t know what the solution to this whole problem may be.  It almost seems like a self-perpetuating problem.  Companies put out ads for IT people with certifications X, Y, and Z, except those certification programs are not producing qualified candidates.  I for one would like to see an end to employment ads requiring a candidate with a bunch of 3- and 4-letter acronyms after their name.



 
