20 things to consider when deciding on the structure of your IT organization.

At some point in most organizations, the decision is made to centralize and/or standardize Information Technology Services. This need for centralization and standardization arises from the complexity that comes with increasing size and the difficulty of managing an environment that has multiple moving parts—many in different directions.

The desire to take control of an environment that is considered in “disarray” is a strong one, and in many cases, it’s not a bad idea. However, having been on both sides of this debate, I have discovered some truisms about changing your IT structure that you might want to ponder before making a final decision:

1. Totally centralized, totally decentralized, and hybrid IT environments can all work; it just takes good top management, a robust set of plans, and an IT framework to pull it off. If you are going to insist on centralizing IT, you had better be prepared to be flexible and provide superior customer service. One thing decentralized environments tend to excel at is customer service, because they are closer to the customer and often are run by the customer.
2. Therefore, if you are going to take IT functions away from the other departments, be prepared to deliver service like they did.
3. Standardization does not have to mean centralization. It means that all parties agree to abide by a set of standards.
4. Forcing standards down people’s throats is like taxation without representation. You are inviting people to rebel. Form a governance committee where users have a voice.
5. Standards are not always black and white and they need to be reviewed frequently.
6. Technology changes rapidly, and standards that don’t change with it will soon become hated mandates.
7. Piggybacking on the point above, try to build IT environments that are flexible and can accommodate new and changing technology.
8. Don’t use standards as a lame excuse for not being open to new ideas and innovation.
9. Setting a standard for a product such as a laptop and then giving users one configuration choice is not really a choice, nor is it customer friendly.
10. Listen to users’ needs and make sure your standardized choices can meet those needs; if not, your standards are worthless.
11. Just because a departmental IT operation is small does not mean it is insignificant. Often, these operations are working better and smarter than central IT and providing better customer service.
12. Unless you are staffed for it and are extremely customer-focused, allowing users no control will lead to end user frustration.
13. IT support/helpdesk and the rest of your IT operation need to communicate often.
14. Communicate, communicate, communicate—about your plans, about your problems, about threats, current trends, etc. Don’t treat your end users like mushrooms; they will hate you for it and will not support you.
15. At budget time, you will hear nasty rumors floating around regarding your IT organization, whether they are true or not. Best to thwart those by abiding by the rule above.
16. Never forget that the IT organization is there to help the business work better, smarter, faster, cheaper…it is not enough just to keep the lights on.
17. An IT organization without standards can be a management nightmare and extremely wasteful. But an IT organization whose standards are too rigid tends to be out of touch.
18. Communication will aid any type of structure you choose—and the structure you use will help determine the kinds of communication you need to employ.
19. Great technologists do not necessarily make the best managers.
20. No organizational structure can completely make up for bad management.
Having said all of that, my happiest experiences have been running or being part of a hybrid environment. Some IT services are best managed as a centralized service while others are best left decentralized, although I have seen the extreme in each direction work very well or very poorly.
In most cases, as long as your users are getting good service and have a voice in operations, they don’t give a hoot how IT is structured. However, if you stop delivering good service, you will start to feel pressure to move in the opposite direction, as users clamor for change in order to get better service.

Understand and control the centralization cycle

One of the most consistent systems I’ve discovered moving from organization to organization is the “centralization cycle.” Companies move from highly centralized to highly decentralized IT organizational models, sometimes consciously, but often as a result of political forces. I’ve worked with CIOs struggling to keep their centralized models together, and with regional IT managers trying to break apart unresponsive monolithic organizations. No matter what part of the cycle you are on, it’s easy to get lost in the political, social, and organizational struggles and lose sight of the big picture.
From a participant's view, the cycle looks like a giant political and budgetary struggle. Egos and vested interests collide in a battle for political capital. Employees tremble as the great forces in their organizations confront one another, rewriting budgetary authority and lines of management. Or they cynically watch, well aware that this time next year everything will be rewritten again.
From an outsider's perspective, things are a bit less chaotic. In fact, despite the formal statements of various organizations there seems to be a natural cycle at work forcing the change. Different companies have different rhythms; one organization I worked with changed every five years, one every 10, and another every two—and each company's cycle was fairly regular. That last one had some of the most stressed employees I've ever encountered. But though the staff changed, sometimes radically, something held the company on a steady course.
Sources of the cycle
Since the cycles had predictable intervals, each company involved had to have some kind of consistent guiding influence that set the context. Given the personal nature of the struggles, I strongly doubted that it was tied to a vision statement or any other changeable business artifact.
When the revelation came to me, I was between jobs, so I took a bit of time to do research. What I discovered seemed obvious in retrospect. The most important factor in many companies’ centralization cycle was the business cycle of their market. Whether a company centralized or decentralized during good times, however, seemed to vary. Companies in multiple markets tended towards an average cycle, with different divisions centralizing and decentralizing at different rates.
The next most important factor was how credit for profit is allocated. IT (by its nature) rarely shares in the credit for business success but also correspondingly avoids the blame for failures. When there are few failures but many successes, credit accumulates in the extremities. When there are failures, that credit is expended, allowing the IT department to wrest control of budgets away from the outside factors. If, however, the IT department is organizationally the scapegoat for problems, it correspondingly gains in power when things go well.
Another guiding factor was how quickly the organization adopted changes. The easiest measurement for this came from the identification of new "change initiatives" and how deeply they affected the company. Companies whose management embraced change, but with very conservative structures, changed slowly. Less conservative cultures changed more quickly, whether the management wanted them to or not. In this case, IT centralization was just one of a host of changing systems, oscillating along with the rest of the organization.
The final factor I found, although it was present in only a few of my clients, was the internal promotion system. In organizations where people could be promoted into central IT from the local organizations, the transition from centralized to localized IT seemed to carry far fewer ramifications. In these organizations, the “transition” was largely a matter of budgetary chest-beating; the actual decision making occurred through much more cooperative informal channels.
Turning this to our advantage
Other than pure sociological interest, what does this kind of analysis tell us? Does it suggest ways we can tune our organizations? Provide us with a unique tool for preserving the power of centralized IT despite whatever pressures might override us? Or are we doomed to forever writhe in the hands of a cycle we can’t control, taking credit and accepting blame for things that really have nothing to do with us?
The first thing it does is allow us to achieve some perspective on our situations. Although the losses and gains may feel distinctly personal, they are also part of a larger system of behavior within the organization. We can use this information to set aside our egos and address what our organization needs, not what we want.
The idea that centralization is a cycle also allows us to logically consider whether what we want meshes with the cycle of activity. Trying to push for centralization during a decentralization phase of the cycle may simply be burning political capital for no gain. If the forces at work are large enough (like a combination of corporate organization and market cycles) we can easily find ourselves ground underfoot. Rather than pushing for full centralization in such times, we need to concentrate our resources on protecting our "key features"—whatever it is that provides our particular IT organization with the highest ROI.
Finally, the idea that we are dealing with a cycle allows us to investigate the factors in that cycle in our own organizations. If we can work out the factors influencing the transition, we can more accurately focus our efforts. This allows us to step back from the limited arguments about a particular project or budget and focus on changing the influences themselves. In some cases (like a market cycle), we have no real control over the influence. In others (like promotion policy or organizational culture), we may have more influence than we think.
By formalizing and researching the idea of a cycle for IT centralization, we provide ourselves with a theoretical tool to help with our overall strategy for our organizations. We also create a context in which we can think about specific initiatives in terms of their applicability.

Counterpoint — five reasons to decentralize your IT department

I extolled the benefits of centralizing your IT department last week, and now I’m going to provide the counterpoint: the top five advantages you can gain by decentralizing your IT department.

5. IT is a smaller target for budget cuts
Decentralizing primarily involves taking parts of the IT department (for example, software engineers for custom projects or help desk professionals) and assigning them directly to a department or business unit. This leaves a smaller group of professionals in the central services wing of the IT department. One of the advantages of this is that IT is not such a huge target when it comes time for budget cuts, and the IT workers in the business units are much more closely tied to revenue and so are less likely to be viewed as expendable.
4. Less bureaucracy to manage
With a smaller group of IT professionals in central services, there are typically fewer groups, less hierarchy, and less political in-fighting. All of that adds up to less bureaucracy for IT leaders to manage, which means more time can be spent on developing effective IT strategies.
3. Projects get done faster
When you have developers, engineers, and architects tied directly to the business units, they tend to need fewer meetings and less communication to get on the same page with the stakeholders on the business side. That’s because they work more closely with the business side on a daily basis and typically report up through the business leaders of the division. This kind of streamlined communication can lead to projects that get done much faster and more efficiently.
2. Achieve better IT/business alignment
When business unit leaders have IT professionals and IT teams who are part of their department, they tend to demonize IT far less. And when IT pros are part of a business unit or department (in a large organization), they often do a much better job of learning the business and finding the technologies that can enhance it.
1. Increase responsiveness to users and customers
The number one value proposition is speed. Requests don’t have to go into a central queue and then wait for the appropriate and/or available technologist to handle them. Business leaders can work directly with the technologists in their business unit to solve problems, make changes to a project, tweak plans, make purchases, etc. This often results in much higher internal satisfaction with IT. For some businesses, it can also translate directly into higher customer satisfaction thanks to the perception of increased responsiveness.

Assemble the perfect system administrator’s toolkit

The Job

You’ve been in IT for the past 15 years. As the IT manager of a big firm, you manage a team of 10 IT staff that serves the in-house needs of more than 500 employees, and you know you do a great job of it.

After another day hard at work planning the new PBX migration project, your mobile phone rings. It’s your CEO on the line. There’s a problem with his home PC, which refuses to boot. He needs to retrieve a critical document from it for a keynote presentation the next day. He lives down the road from you.

So what do you do now?

A) Tell him you’re an IT manager, and you don’t do PC servicing anymore.
B) Tell him that you’re at as much of a loss as he is.
C) Tell him not to worry and show up at his house an hour later with the team leader.
D) Tell him not to worry and that you’ll be right over in 5 minutes yourself.

If your answer is option A, B, or maybe even option C, then I suggest you head down to Toni’s excellent Career blog for some advice on getting a new job.

If your answer is D, then perhaps this Right Tool post is for you.

Sometimes, there’s no other way but to roll up your sleeves and get your hands dirty. Nothing beats being prepared, however. To help you along, I have put together a list of items that you can assemble into your very own system administrator survival toolkit.

The list is presented in no particular order.

As you might have noticed by now, today’s Right Tool post is somewhat different. Instead of a single tool, I’m presenting you with a list of 20 tools that you might want to consider throwing into your own system administrator’s toolkit. (Come on, you know real IT pros build their own kits.)

Cable tester
Portable labeler
Bluetooth mouse
Anti-static strap
Releasable cable ties
Portable hard disk drive
Encrypted USB flash drive
Crimping tools
Hard disk wiper
Hard disk to USB adapter
USB hub
RJ11 cable
Patch cables
Multimeter
Screwdrivers
Multi-plug adapter
Original disc media
Serial to USB adapter
RJ-45 extender
Wireless modem

The Right Tool for the Job?

How well does this lineup represent your needs? Please let us know what you would put in your toolkit. And yes, it should be something you can lug around relatively easily, so you can leave out that 42-U server rack and SAN array.

Ensure basic Web site security with this checklist

While I normally advocate a principles-based approach to maintaining system security, and deplore the typical “best practices” checklist approach, that doesn’t mean security checklists are without value. Employing a security procedures checklist is only the first step toward securing a resource, a means of aiding your memory before you apply your critical thinking skills and imagination to the problem of improving on the checklist in each individual case. Sometimes, a checklist can be useful for shaping workplace security policies as well.

A number of far-too-common security failures on Web sites and Web servers are addressed here. Because of the frequency of these poor security practices, it strikes me as important to gather good practices that address these problems in one place and to make them publicly available to Web server administrators, Web developers, and Webmasters. For those of you who haven’t considered all these factors in managing your Web resources, I recommend dealing with what you have left unconsidered as quickly as possible.

For those whose management has proved resistant to suggestions for improving security in these areas, or who simply need help in composing a message to management that will make your point clearly so that it isn’t misunderstood, I hope you find the following checklist of Web security practices helpful.

Login pages should be encrypted: The number of times I have seen Web sites that only use SSL (with https: URL schemes) after user authentication is accomplished is really dismaying. Encrypting the session after login may be useful — like locking the barn door so the horses don’t get out — but failing to encrypt logins is a bit like leaving the key in the lock when you’re done locking the barn door. Even if your login form POSTs to an encrypted resource, in many cases this can be circumvented by a malicious security cracker who crafts his own login form to access the same resource and give him access to sensitive data.
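As a rough illustration of this point (not taken from the original checklist), here is a minimal sketch of serving the login page itself only over HTTPS, written in Python with the Flask framework; the app, the /login route, and the redirect logic are illustrative assumptions.

    # Minimal sketch (Flask assumed): serve the login form only over HTTPS,
    # so credentials are never submitted in the clear.
    from flask import Flask, request, redirect

    app = Flask(__name__)

    @app.before_request
    def force_https_for_login():
        # If the login page is requested over plain HTTP, bounce the browser
        # to the HTTPS equivalent before any form is rendered or processed.
        if request.path == "/login" and not request.is_secure:
            return redirect(request.url.replace("http://", "https://", 1), code=301)

    @app.route("/login", methods=["GET", "POST"])
    def login():
        # Render or process the login form here, knowing the transport is encrypted.
        return "login form goes here"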

Data validation should be done server-side: Many Web forms include some JavaScript data validation. If this validation includes anything meant to provide improved security, that validation means almost nothing. A malicious security cracker can craft a form of his own that accesses the resource at the other end of the Web page’s form action that doesn’t include any validation at all. Worse yet, many cases of JavaScript form validation can be circumvented simply by deactivating JavaScript in the browser or using a Web browser that doesn’t support JavaScript at all. In some cases, I’ve even seen login pages where the password validation is done client-side — which either exposes the passwords to the end user via the ability to view page source or, at best, allows the end user to alter the form so that it always reports successful validation. Don’t let your Web site security be a victim of client-side data validation. Server-side validation does not fall prey to the shortcomings of client-side validation because a malicious security cracker must already have gained access to the server to be able to compromise it.
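To make the idea concrete, here is a hypothetical sketch of server-side validation in Python with Flask; the /register route and the field names are made up for the example, not part of the article.

    # Sketch of server-side validation: every value from the browser is
    # re-checked on the server, regardless of any JavaScript validation.
    import re
    from flask import Flask, request, abort

    app = Flask(__name__)
    EMAIL_RE = re.compile(r"^[^@\s]+@[^@\s]+\.[^@\s]+$")

    @app.route("/register", methods=["POST"])
    def register():
        email = request.form.get("email", "")
        age = request.form.get("age", "")

        if not EMAIL_RE.match(email):
            abort(400, "Invalid email address")   # reject; never trust the client
        if not age.isdigit() or not (13 <= int(age) <= 120):
            abort(400, "Invalid age")

        # Only after all checks pass does the request proceed.
        return "registered"

A cracker can bypass the browser entirely, but cannot bypass checks that run on the server.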
Manage your Web site via encrypted connections: Using unencrypted connections (or even connections using only weak encryption), such as unencrypted FTP or HTTP for Web site or Web server management, opens you up to man-in-the-middle attacks and login/password sniffing. Always use encrypted protocols such as SSH to access secure resources, using verifiably secure tools such as OpenSSH. Once someone has intercepted your login and password information, that person can do anything you could have done.
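For example, file transfers that might otherwise go over plain FTP can run over SSH instead. The sketch below uses the third-party paramiko library in Python; the host name, user, and paths are placeholders.

    # Sketch: uploading a file over SFTP (SSH) rather than unencrypted FTP.
    import paramiko

    client = paramiko.SSHClient()
    client.load_system_host_keys()                                # verify the server's key
    client.set_missing_host_key_policy(paramiko.RejectPolicy())   # refuse unknown hosts

    client.connect("www.example.com", username="webadmin")        # key-based auth preferred
    sftp = client.open_sftp()
    sftp.put("index.html", "/var/www/html/index.html")            # encrypted transfer
    sftp.close()
    client.close()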

Use strong, cross-platform compatible encryption: Believe it or not, SSL is no longer the top-of-the-line technology for Web site encryption. Look into TLS, which stands for Transport Layer Security, the successor to Secure Sockets Layer encryption. Make sure any encryption solution you choose doesn’t unnecessarily limit your user base, the way proprietary platform-specific technologies might, as this can lead to resistance to the use of secure encryption for Web site access. The same principles also apply to back-end management, where cross-platform-compatible strong encryption such as SSH is usually preferable to platform-specific, weaker encryption tools such as Windows Remote Desktop.
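As one illustration of preferring TLS on the server side, Python’s standard ssl module can build a TLS context that refuses legacy protocol versions; the certificate and key file names below are placeholders.

    # Sketch: a server-side TLS context that rejects legacy protocol versions.
    import ssl

    context = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)  # negotiate the newest mutually supported TLS
    context.minimum_version = ssl.TLSVersion.TLSv1_2   # refuse SSLv3 and early TLS
    context.load_cert_chain(certfile="server.crt", keyfile="server.key")
    # Hand "context" to your web server or socket code when wrapping connections.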

Connect from a secured network: Avoid connecting from networks with unknown or uncertain security characteristics or from those with known poor security such as open wireless access points in coffee shops. This is especially important whenever you must log in to the server or Web site for administrative purposes or otherwise access secure resources. If you must access the Web site or Web server when connected to an unsecured network, use a secure proxy so that your connection to the secure resource comes from a proxy on a secured network. In previous articles, I have addressed how to set up a quick and easy secure proxy using either an OpenSSH secure proxy or a PuTTY secure proxy.

Don’t share login credentials: Shared login credentials can cause a number of problems for security. This applies not only to you, the Webmaster or Web server administrator, but to people with login credentials for the Web site as well — clients should not share login credentials either. The more login credentials are shared, the more they tend to get shared openly, even with people who shouldn’t have access to the system. The more they are shared, the more difficult it is to establish an audit trail to help track down the source of a problem. The more they are shared, the greater the number of people affected when logins need to be changed due to a security breach or threat.

Prefer key-based authentication over password authentication: Password authentication is more easily cracked than cryptographic key-based authentication. The purpose of a password is to make it easier to remember the login credentials needed to access a secure resource — but if you use key-based authentication and only copy the key to predefined, authorized systems (or better yet, to separate media kept apart from the authorized system until it’s needed), you will use a stronger authentication credential that’s more difficult to crack.
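Continuing the hypothetical paramiko sketch from above, connecting with a private key file rather than a password might look like this; the host, user name, and key path are placeholders.

    # Sketch: key-based SSH authentication instead of a password.
    import paramiko

    client = paramiko.SSHClient()
    client.load_system_host_keys()
    client.connect(
        "www.example.com",
        username="webadmin",
        key_filename="/media/usbkey/webadmin_key",  # key kept on separate media
        look_for_keys=False,                        # don't fall back to other local keys
        allow_agent=False,                          # ...or to an SSH agent
    )
    # No password crosses the wire, and the private key never leaves your side of the connection.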

Maintain a secure workstation: If you connect to a secure resource from a client system that you can’t guarantee with complete confidence is secure, you cannot guarantee someone isn’t “listening in” on everything you’re doing. Keyloggers, compromised network encryption clients, and other tricks of the malicious security cracker’s trade can all allow someone unauthorized access to sensitive data regardless of all the secured networks, encrypted communications, and other networking protections you employ. Integrity auditing may be the only way to be sure, with any certainty, that your workstation has not been compromised.
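Purpose-built tools such as Tripwire or AIDE do this properly, but a toy Python sketch shows the basic idea behind integrity auditing: record cryptographic hashes of sensitive files and compare them later. The watched file list and baseline path are illustrative only.

    # Toy integrity audit: compare current file hashes against a saved baseline.
    import hashlib, json, pathlib

    WATCHED = ["/usr/bin/ssh", "/etc/hosts"]          # example files to watch

    def fingerprint(path):
        return hashlib.sha256(pathlib.Path(path).read_bytes()).hexdigest()

    baseline = json.loads(pathlib.Path("baseline.json").read_text())
    for path in WATCHED:
        if baseline.get(path) != fingerprint(path):
            print(f"WARNING: {path} has changed since the baseline was recorded")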

Use redundancy to protect the Web site: Backups and server failover can help maintain maximum uptime. While failover systems can reduce outages due to server crashes (perhaps because of DDoS attacks) and server shutdowns (perhaps because the server was hijacked by a malicious security cracker) to mere hiccups in service, that isn’t the only value to redundancy. The duplicate servers used in failover plans also maintain an up-to-date duplication of server configuration so you don’t have to rebuild your server from scratch in case of disaster. Backups ensure that client data isn’t lost — and that you won’t hesitate to wipe out sensitive data on a compromised system if you fear that data may fall into the wrong hands. Of course, failover and backup solutions must be secured as well, and they should be tested regularly to ensure that if and when they are needed, they won’t let you down.

5 of the best desktop operating systems you never used

Bill Gates’ original dream when he created Microsoft was to have “a computer on every desk and in every home, all running Microsoft software.” Clearly, he accomplished that goal. Depending on whose statistics you want to believe, Windows has a market share in the high-80 to low-90 percent range. So, unless you run Linux or prefer Mac OS X, chances are you’re a Windows user.

When it comes to desktop operating systems, your choices are really pretty narrow. You either run Windows, or you run some Unix-like OS. There are the seemingly 12,000 different Linux distributions. There’s always FreeBSD if you prefer your Unix without a Finnish flavor. You could go the vendor route and run AIX or HP-UX. Sun has Solaris, and as much as you might want to, you can’t forget SCO. And of course, there’s always Mac OS X. Although it may sound like variety, when it comes down to it, it’s still Windows vs. Unix.

There are other options, or at least there USED to be. Here is a list of five of the best operating systems that you probably never used.

OS/2

No discussion of Microsoft alternatives can be had without mentioning OS/2. Until Microsoft shipped Windows 2000 Professional, OS/2 4.0 was probably my desktop OS of choice. For the purposes of this section, I’m referring to OS/2 2.0 and later, not IBM and Microsoft’s ill-fated OS/2 1.x series.

IBM billed OS/2 as being a “Better DOS than DOS” and a “Better Windows than Windows”. Anyone who ever ran OS/2 knows that IBM largely succeeded. From a technical perspective, OS/2 was much more solid than DOS, Windows 3.x or even Windows 9x.

OS/2 had many innovations that we have come to view as standard equipment in an OS today. OS/2 was the first major 32-bit operating system. It was completely multi-threaded. Its HPFS file system resisted fragmentation and could natively support long filenames. OS/2 was the first major OS to integrate a Web browser into the operating system. It was also the first operating system to offer voice control.

There are many reasons why OS/2 failed. Windows 95 came out, and even though OS/2 was more stable, its inability to run Win32 API-based programs doomed it. It ran DOS and Windows 3.1 programs so well that ISVs never had an incentive to create native OS/2 programs. Microsoft’s licensing scheme with OEMs discouraged hardware vendors, including IBM itself, from bundling OS/2. It didn’t help that IBM couldn’t market OS/2 to save its life.

Even though the last version of OS/2 shipped in 1996, IBM continued to support OS/2 until December 31, 2006. Many OS/2 supporters have tried to get IBM to release OS/2’s source code for open source development, but IBM refuses. Supposedly this is due to some of the Microsoft code that still exists in OS/2, which IBM doesn’t have the rights to release. At the same time, however, IBM licensed OS/2 to Serenity Systems, which continues to support, upgrade, and extend OS/2 in its own product called eComStation.

One final bit of OS/2 trivia: Microsoft co-developed OS/2 1.x with IBM. When IBM and Microsoft got ‘divorced’ in the late ’80s, Microsoft took its part of the code for what was to become OS/2 3.0 on the IBM/Microsoft product roadmap and created Windows NT 3.1, which today lives on as Windows Vista and Windows Server 2008.

NeXT

The NeXTSTEP OS is one that even I never used. It came up in conversation with Jason Hiner, who had used it while a student at IU. NeXTSTEP has an important place in history that can’t be overlooked.

Today, Apple is Steve Jobs and Steve Jobs is Apple. You can’t really think of one without the other. It wasn’t always that way though. In 1985, in grand Greek Tragedy form, Steve Jobs was forced out of Apple by John Sculley, the executive that Jobs himself brought in from Pepsi to save Apple from financial disaster. When Jobs left Apple, he went on to form the NeXT Computer Company.

NeXT’s initial goal was to create powerful workstations for education and business. The NeXT workstation’s major innovation at the time was its 256MB magneto-optical drive, which it used for removable storage rather than a traditional floppy drive. The NeXT came with the entire works of Shakespeare on a single disc, which was one of the ‘cool factors’ about the box when it was introduced. The NeXT workstation also continued Jobs’ history of thinking different when it came to design, because the NeXT workstation was a simple Borg-like cube.

At the heart of the NeXT workstation was the NeXTSTEP OS. This OS was based on the Mach Unix kernel. It was originally developed for the Motorola 68030/68040 CPUs in NeXT’s own hardware, but NeXT later created a version of it that ran on the Intel 486 CPU, called NeXTSTEP 486.

NeXTSTEP is significant because when Jobs finally retook his rightful place at the head of Apple in 1996, he did so by arranging for Apple to buy NeXT. In doing so, the NeXTSTEP OS came along as part of the package and ultimately became Mac OS X.

BeOS

The BeOS was interesting, powerful, and probably the most jinxed OS ever created. It debuted in 1991, and some of its innovations, such as the 64-bit journaling file system BFS, still haven’t found their way into current operating systems.

BeOS came very close to becoming the operating system we use on the Mac platform today. BeOS started out as a proprietary operating system for the BeBox, a workstation that ran PowerPC CPUs. When the BeBox failed to go anywhere in the marketplace, Be tried to sell the company to Apple as a replacement for the Mac OS, which by 1996 was starting to show its age in the face of Windows 95. Apple nearly did it, but decided instead to buy NeXT and bring back Steve Jobs, as mentioned above.

Be then continued its desperate bid to find a home and purpose for the OS. It started by trying to peddle BeOS to the makers of Mac-clones who were cut off from Apple when Steve Jobs returned. That didn’t work. (Yes, in the mid-90’s you could actually buy clones of the Mac. Apple licensed the OS and the Mac ROMs to OEMs. One of Steve’s first actions upon getting back in at Apple was to squash the Mac-clone market.)

Be then tried to port the BeOS to the Intel platform and get some traction against Windows. That didn’t work either. Be next tried to create a version of BeOS for Internet appliances. When that failed as well, Be sold out to Palm, whose software spin-off PalmSource wanted to include BeOS technology in its next OS. Guess how that turned out? PalmSource subsequently crashed and burned, and the rights to BeOS ended up with ACCESS Co., a Japanese mobile software company.

I never used BeOS other than to install it and kick it around a little to see how it worked. I have a copy running in Virtual PC on my test machine, but due to the limited hardware support of the virtual machine environment, BeOS won’t come up in color and won’t talk to the network card.

DESQview

The last two I want to mention aren’t really operating systems per se, but rather operating environments. But, if Windows 9x can qualify as an operating system, so can these. The first is DESQview.

DESQview was a program that ran on top of DOS and allowed you to multitask DOS programs. As a matter of fact, until Microsoft introduced Windows 95, the best way to run multiple character-based DOS programs (with the exception of OS/2) was DESQview.

DESQview didn’t multithread programs, because such technology didn’t exist at the time. Rather, through the use of QEMM, DESQview used expanded memory to run DOS programs simultaneously on computers with an 80386 CPU. If you had only a 286, you couldn’t use expanded memory, but DESQview would still task-switch programs through extended memory. It wasn’t as efficient as running on a 386, but it still got the job done.

Of course, Windows 3.x could multitask DOS programs. Compared to DESQview, however, Windows 3.0 had so much overhead that it was slower and often wouldn’t leave enough of the lower 640KB of memory for DOS programs to run. If you had enough extended memory in your computer, QEMM, DESQview’s memory manager, could actually free almost the entire lower 640KB memory area for program use.

DESQview was one of the first victims of the PC tradition of Good Marketing Beats Better Technology. Even though DESQview multitasked DOS programs better than Windows did, Microsoft ultimately won the day. Quarterdeck, the maker of DESQview, tried creating a GUI version called DESQview/X, but it never went anywhere. Ultimately, Quarterdeck sold out to Symantec. Symantec still owns the rights to DESQview but doesn’t market it.

I used DESQview extensively in college. Even on an 80286 without QEMM, you could still multitask programs very well using DESQview. Unfortunately, I couldn’t find my copy of DESQview to grab a screenshot for this blog post. I’ll see if I can find it and get one.

GEOS / GeoWorks

In the early ’90s, if you wanted to get on the GUI bandwagon and didn’t want to use a Mac, your only real choice was Windows 3.0. But to make Windows 3.0 work properly, you really needed a 386 with EGA or VGA graphics. If you had an ‘older’ computer, you were pretty much out of luck. That’s where PC/GEOS came in.

GEOS was a GUI that ran on Atari and Commodore 64 computers. In 1990, GeoWorks created a version of GEOS called PC/GEOS, which supported a GUI and limited multitasking on 286 and even some XT machines (8088-based PC clones). GEOS was lightweight, fast, and easy to use, but it never got traction with software developers because it was hard to program for and the developer kit was expensive.

GEOS included Ensemble, its own office suite consisting of a word processor, spreadsheet, dialer, database, and calendar. This was in an era when Microsoft Office didn’t exist and, if you wanted these applications, you had to buy them separately. GEOS was also used by AOL for the DOS version of its connection software.

Once Windows conquered the desktop and hardware caught up with Windows’ appetite, GEOS fell out of favor. GeoWorks ultimately sold out to NewDeal Inc., which tried to market the OS as a Windows alternative for those with older machines and for schools. When this didn’t work, NewDeal ultimately failed and sold its business to Breadbox, which continues to make, support, and update a version of GEOS called Breadbox Ensemble.

My copy of GEOS is long gone, but I ran it for a while on my Tandy 1000. It did the job, but I needed more power than what was in the supported applications and it didn’t run DOS programs very well.

All that and more

So there you have 5 of the best operating systems you probably never used. Each introduced innovations that we still use today, as well as some we’re still trying to catch up with even though the programs debuted in the 20th century. In each case, they were overlooked, underrated, and ultimately crushed by the Microsoft steamroller.

There are plenty of OSes I left off the list: CP/M, TRS-DOS, LDOS, DR-DOS, and others (which I encourage you to remind me of). We’ll try to cover those in the future as well.

The worst server room decisions ever made by management

Demanding a full day of UPS runtime (Overreacting to an outage)

Annual electrical maintenance resulted in a total shutdown of power for about eight hours one Sunday. Unknown to IT, one of the departments had a staffer coming back every Sunday to do an online electronic filing of certain shipping documents.

Never mind that nobody in that department saw the notices of the impending power shutdown on every elevator door over the entire week, or noticed the company-wide e-mail blast. The lone staffer arrived as usual that fateful afternoon to a darkened office. Obviously, he failed to do the requisite filing, resulting in a compound fine being imposed on the company.

The company’s General Manager was upset and asked why the uninterruptible power supply (UPS) investment still resulted in non-functional servers. When it was pointed out that the current UPS could keep the dozen servers running for only 15 to 20 minutes, the order was given (over my objections) to purchase sufficient UPS capacity to last through a “full day” of power outages.

Preliminary estimates with engineers from APC indicated that we needed two 42-U racks packed with UPS units and extender batteries to meet the desired runtime. The cost? $20,000.

The idea was given up only when I realized that a fully running server room without powered air conditioning or ventilation is not a very good idea. To spare myself an urgent visit to Toni’s View from the Cubicle blog for tips on getting a new job, I doubled the estimate to accommodate the air conditioning, at which point we also ran out of space in the server room. Thankfully, the directive was scrapped after that.

If you have to ask: yes, a diesel generator was totally out of the question, since the server room was located smack in the middle of an office complex.

Refusal to buy server racks (Penny wise, pound foolish)

Unless you’ve been working in multinational corporations (MNCs) all your life, you’ve probably encountered this one before: management refusing to purchase proper server racks.

Now, a certain reluctance to splurge several grand on high-end kit is understandable. But the situation becomes a little intolerable when we’re talking about just a couple of simple bare-bones 42-U racks costing less than a grand each, to house no fewer than a dozen and a half servers currently scattered all over.

Have you encountered a situation like this before?

In my case, I finally got my way in this scenario. But I want to hear from more of you who serve on the “front line”: how would you justify the value of proper server racking?

Splurging on the wrong things (Reacting from fear)

Just before I joined this particular company, one of the database servers suffered a serious hard disk error, resulting in a corrupted database. The near-line backup was no good because its mediocre hard disk had long since run out of sufficient capacity for even one full backup.

We recovered the data for the most part. Due to the resulting anxiety, management wanted to replace two of the database servers with brand spanking new ones.

I had just joined the company, and the exact instruction given by my boss was: “Just go for the best. I’ll pay.” Now, you must understand that these database servers, though critical, were used by no more than five users each; there were so many cheaper ways to prevent a recurrence of the problem. I confess that I didn’t follow the instructions in the end. I quietly told the vendor to just give me something mid-range, and we ended up spending about $16,000 on two HP servers.

Still, the money could have been better spent elsewhere, like replacing a couple of production servers that were more than eight years old and for which there was no functional hardware equivalent.

Uninstalling and disabling drivers in Windows Server 2003’s Device Manager

Last week, we went over rolling back drivers in order to undo updates. Still, occasionally you may need to uninstall or disable a device driver that is no longer necessary or is not performing as expected within Windows Server 2003. This tip will go through the process of uninstalling and disabling device drivers.

As you may recall, updating a device driver requires that you download a driver before starting. It is recommended that you also view the details of both the existing driver (using the Driver Details button) and the newly downloaded driver (by unzipping the driver file, locating either the .dll or .sys file, and then right-clicking one of these files and choosing Properties). You can then proceed with the Update wizard that appears on the screen.

Uninstall Driver removes the current device driver and its device from the Windows Server 2003 system altogether. This can be useful if you are troubleshooting a device problem, allowing you to uninstall and reinstall the driver.

To uninstall a device driver, complete the following steps:

1. Open the Computer Management Console by right-clicking My Computer on the Start menu and selecting Manage.
2. Select Device Manager in the left pane of the console. The Device Manager will then display a list of installed devices in the right pane of the console.
3. Expand the category of the device you wish to uninstall.
4. Right-click the device and select Properties.
5. Select the Driver tab on the device’s Properties dialog box.
6. Click the Uninstall Driver button.
7. A new dialog box will pop up asking you to confirm the uninstall. Click OK to proceed.
8. Shut down the system to remove the device.

When uninstalling a device driver for plug-and-play devices, the device must be connected to the system. Windows Server 2003 manages most of these drivers dynamically.

In some cases, you may wish to determine how your system will function without a device — perhaps one that you no longer use. You can use the steps above to remove the driver for the device and test the system without the device.

Rather than removing a device to test the system operation, you can also disable it in the Device Manager to prevent the system from trying to start the device. Follow these steps:

1. In Device Manager, right-click the device you wish to disable.
2. Choose Disable from the Context menu.

You can then test the system with the hardware turned off to see how it will perform. If the system meets your organization’s needs without the device, you can remove the device the next time it is convenient to shut down the Windows Server 2003 system.
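If you prefer to script these steps rather than click through Device Manager, Microsoft’s devcon.exe command-line utility (from the Windows support tools/driver kit) can disable and remove devices. The sketch below simply shells out to it from Python; it assumes devcon is installed and on the PATH, and the hardware ID shown is a placeholder for your device’s actual ID.

    # Sketch: scripting disable/uninstall via devcon.exe
    # (find your device's hardware ID first with "devcon hwids *").
    import subprocess

    HWID = "PCI\\VEN_XXXX&DEV_XXXX"   # placeholder hardware ID

    subprocess.run(["devcon", "disable", HWID], check=True)  # like Disable in Device Manager
    # ...test the system with the device turned off...
    subprocess.run(["devcon", "remove", HWID], check=True)   # like Uninstall Driver
    subprocess.run(["devcon", "rescan"], check=True)         # re-detect hardware if you change your mind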

Configure Windows Explorer to display Windows XP disk drives

When you double-click the My Computer icon in Windows XP, you see a list of all the drives on your system. However, when you launch Windows Explorer, it displays the contents of My Documents in the right panel. If you like the way that the My Computer view displays all the disk drives when you first launch it, but prefer the Windows Explorer view, here’s how you can get the best of both views.

1. Right-click on the desktop.
2. Select New | Shortcut.
3. Type C:\Windows\Explorer.exe /n, /e, /select, C:\ in the text box, then click Next.
4. Type My Explorer in the text box and click Finish.

Using the /select switch with C:\ as the object causes Windows Explorer to open a My Computer view of your system. Now, when you select your new shortcut, your window will look more like the My Computer view.

Can a CEO's terrible people skills affect company success?

Julie Roehm was fired from Wal-Mart; Robert Nardelli was forced out of Home Depot; Steve Heyer was let go from Coca-Cola; and Harry Stonecipher lost his job at Boeing. They were all C-level executives at high-profile companies who lost their jobs due to interpersonal incompetence (aka “no people skills”).
In his Forbes blog last week, Dale Buss quotes Bob Eichinger, CEO of Lominger International, as saying such people are “promoted into their jobs for their business smarts, and they fail because of weaknesses in their people smarts.”
Well, no kidding.
As I think we’ve all learned from Donald Trump, many highly successful people rise to the top because they have some kind of genius for business. And many stay at the top in spite of a pronounced lack of interpersonal skills. As long as they’re making money, the stakeholders can overlook the trickle-down problems that affect the worker bees.
Until, that is, those trickle-down problems become so severe that they start to affect profits.
I shudder to think how bad things have to get for shareholders to tie in an executive’s interpersonal skills with a shrinking profit margin, but I’m encouraged by the fact that sometimes the connection is indeed recognized.