
March 22, 2006

The 2006 QSM Software Almanac – IT Metrics Edition is here! It contains more than 100 pages of analysis and observations that provide unparalleled access to the latest developments in the software industry.

It’s with great pride that we’re announcing the Almanac here on the pages of Optimal Friction. My partners here at QSM have assembled overviews and in-depth analysis of more than 500 completed projects from all major industries, collected over the last five years. One can easily peruse the (sometimes surprising) qualities and characteristics of “best/worst in class” projects, with the attendant implications about core metrics tradeoffs. Best of all, it distills actionable intelligence from more than 25 years of consulting practice, drawn from the software industry’s most detailed and comprehensive database of completed projects and analyzed with the QSM SLIM Suite of tools.

Special thanks to Doug Putnam, Kate Armel, Don Beckett, and all on the QSM team. Readers of the Almanac will no doubt recognize the heritage of this work, tracing to Larry Putnam’s pioneering research on metrics for the software and Information Technology fields.

As the saying goes,

“without metrics, you’re just a person with another opinion,”

and the Almanac delivers easily understood, detailed, expert analysis of why collecting and analyzing core metrics matters, using history as a guide to the future. This volume makes a wonderful companion to the QSM SLIM Suite of tools for rapid, accurate benchmarking, estimating, and “in flight” project control, right at the desktop. Users of SLIM will find the Almanac a highly useful source of benchmark data for software project estimates and productivity assessments.

No matter what your industry or corporate stovepipe, or the scope, schedule, staffing, technique, or language used in your world, you will gain a better grasp of industry trends that can help improve your company’s project management and save time and money. For more information and ordering, please contact Sean Callaghan at QSM Associates, 413-499-0988, ext. 105. The cost is $500, with discounts on orders of five or more volumes.

February 10, 2006

Last week, an excellent piece by my Cutter Consortium colleague Ken Orr crossed my inbox. I decided to excerpt it here since it directly speaks to a subject near and dear to my heart: software complexity.

After reading this, you may want more access to insights from various Cutter authors like Tom DeMarco, Ed Yourdon, Tim Lister, Rob Austin, and scores of other experts. We publish research on Agile methods, Outsourcing, Business Technology Trends, Benchmarking, and the like. Check out the Cutter Consortium website at www.cutter.com. Sign up for a trial subscription by contacting us there. You’ll be plugged in to some really interesting stuff!

Complexity Doesn’t Scale

by Ken Orr, Fellow, Cutter Business Technology Council

Cutter Senior Consultant Tom Welsh asked in a recent Business Technology Trends Executive Update whether software development had gotten too complex. He asked for feedback, so here it is: the answer to Tom’s question is unquestionably “Yes,” software development has clearly become too complex! While it is true that the software that people use today is more sophisticated, at least at the user interface level, the complexity of software development has clearly spun out of control.

There are plenty of villains in this piece. There are the hardware and software vendors who have pushed new generations of user interface, operating systems, and programming languages while largely ignoring business analysis, requirements, and design. And there are the software developers who, until many of their jobs were swept away by outsourcing, were so enamored with the latest bells and whistles that they lost track of delivering high-quality, easy-to-maintain software.

There are also the software tool vendors who stopped working on Computer Aided Software Engineering (CASE) tools in favor of more and more complex development environments. As I work with clients around the world, I am amazed how complex their development environments are and how difficult it is to do the simplest things. In previous Advisors, I have commented on how difficult something as simple as deployment (of anything) is today.
This is a travesty! Deployment should be as easy as pushing a button. To use a tried and true object paradigm: software tool vendors ought to “hide” the complexity of deployment from the developers.

The same is true of security. I have watched development teams all over the world struggle to get their programs past operational security barriers long before they should have been worrying about fitting into role-based security or biometrics or anything like that.

Complexity is killing us, and it doesn’t have to. We need to reduce the high-order complexity of our systems and our programs. But we can’t program our way out of complexity; we have to architect/design our way out! Recently, I read a long article on ways around the “buffer overflow problem,” which continues to allow hackers to break into our operating systems. We know how to solve the problem — make it a systems error with no return — but there are some presumed performance problems, so we add complex, tricky code to try to get rid of a design problem and instead confound our difficulties.
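
To make that concrete, here is a minimal sketch (my own illustration in C, not Ken’s, with made-up names rather than any real library’s API) of the design he is describing: the bounds check fails hard, so an overflow becomes a fatal error with no return instead of silently corrupting adjacent memory.

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    #define BUF_SIZE 16

    static char buf[BUF_SIZE];

    /* Bounds-checked write: overflowing the buffer aborts the program
       outright rather than handing an attacker the adjacent memory. */
    static void buf_write(size_t offset, const char *src, size_t len) {
        if (offset > BUF_SIZE || len > BUF_SIZE - offset) {
            fprintf(stderr, "fatal: %zu-byte write at offset %zu overflows buffer\n",
                    len, offset);
            abort();  /* the "systems error with no return" */
        }
        memcpy(buf + offset, src, len);
    }

    int main(void) {
        buf_write(0, "hello", 6);        /* in bounds: fine */
        printf("%s\n", buf);
        buf_write(8, "0123456789", 10);  /* overflow: aborts, never returns */
        return 0;
    }

The point is that the check lives in one place, designed in, instead of being scattered (or forgotten) across every caller.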

Complexity doesn’t scale! It’s much like what Michael Jackson used to say of optimization, the reason most often given for complex design: “Don’t do it, and if you have to do it, don’t do it yet!” Each new generation of hardware erases the need for the previous generation’s complex optimization schemes. But complexity, once loosed on the software world, is nearly impossible to take back, because, naturally enough, people take advantage of it.

In the end, nothing scales like elegant design. Despite nearly a generation of “denormalization” tricks, Codd’s rules of normalization still yield databases that are maximally flexible and consistent to update, whereas denormalized databases are enormously difficult to update or extend.
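
The update anomaly behind that claim is easy to show in miniature. Here is a toy sketch, again mine and again in C structs rather than SQL (an entirely hypothetical schema): the denormalized layout copies the customer’s address into every order row, so one change of address must be repeated everywhere, while the normalized form records the fact exactly once.

    #include <stdio.h>
    #include <string.h>

    /* Denormalized: every order row repeats the customer's address. */
    struct order_denorm {
        int  order_id;
        char customer_name[32];
        char customer_addr[64];   /* duplicated on every row */
    };

    /* Normalized: orders reference a single customer record by id. */
    struct customer { int id; char name[32]; char addr[64]; };
    struct order    { int order_id; int customer_id; };

    int main(void) {
        struct order_denorm flat[2] = {
            {1, "Acme", "1 Old Road"},
            {2, "Acme", "1 Old Road"},
        };
        struct customer acme    = {7, "Acme", "1 Old Road"};
        struct order    norm[2] = {{1, 7}, {2, 7}};

        /* Address change, denormalized: fix it row by row
           (miss one and the data is inconsistent)... */
        for (int i = 0; i < 2; i++)
            strcpy(flat[i].customer_addr, "9 New Street");

        /* ...normalized: one update, seen by every order. */
        strcpy(acme.addr, "9 New Street");

        printf("order %d ships to %s\n", norm[0].order_id, acme.addr);
        return 0;
    }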

Every once in a while, I end up talking about the design of things like operating systems or teleprocessing or workflow management systems, and people go off on how hard these tools are to use and manage. Having worked in all of these areas, I can tell you that they don’t have to be so difficult or so hard to manage.
All of that is bad design — the result of having too many coders and too few designers and architects involved. The fact that more and more people have the term “architect” in their title doesn’t make them real product architects. The real architects fight complexity every day. They know that giving in to bad design makes everything else really difficult.

— Ken Orr, Fellow, Cutter Business Technology Council
http://www.cutter.com/consultants/kobio.html

November 6, 2005

Recently, I had a wonderful dinner, one-on-one, with a mentor I am most fond of, Tom DeMarco, at his lovely home on the coast of Maine. Yes, Tom’s of Maine (different Tom). When I visit Tom, I playfully call him “Godfather”.

[Photo: Tom DeMarco]

Tom, as many people know, has written several fabulous books on technology. In fact, one of his first books, Controlling Software Projects, was among the first to describe the statistical existence of what Larry Putnam called “The Impossible Zone”. It is the absolute limit beyond which projects simply have not gone any faster. Tom described the first time he saw a graph of completed projects on which the edge of the zone – and the empty territory beyond it – was delineated. He concluded that, up to that point, he had spent most of his professional life living in the Impossible Zone.

I find that many senior managers will have none of this talk about projects being “impossible”. That would wake up their tired, huddled masses to the realities of unrealistic deadlines and corporate denial. So I don’t say that this area is the Impossible Zone.

I say that when you look at a graph of real, factual, historical data, the zone is simply – “Where No Project Has Ever Gone Before”, in Star Trek terms. In many of my speeches, I ask audiences, “How many of you are given a project deadline first, before anything else?” All the hands go up. I then ask, “If your deadlines were plotted on this chart, how many believe they might be in the Impossible Zone?” All the hands stay up. Tom still has a lot of company.

Lately it seems I’ve been consulting on more projects that are in time-trouble. It’s no surprise: according to a Standish Group study published in the June 2003 issue of Computer, as many as 80 percent of projects are late and over budget, and 40 percent are abandoned. (These figures are even worse than similar measures taken 10 years ago.)

My experience (and I’ve been consulting in the IT and software field for over 18 years) isn’t that projects as a whole are failing to achieve very high levels of productivity. They are achieving them! What we can do today VASTLY exceeds what was possible 10 years ago, and the project data my partners at QSM have been gathering proves it. The problem is that the deadlines get worse with each passing year. Just 2 weeks ago, I consulted to the management team of a very high-pressure, date-driven project. A forecast for the most likely estimate scenario was given to their BigCheese. BigCheese was not happy. He admonished the hapless manager reporting to him as follows: “The date is unacceptable. Change the date.”

To what, might I ask?

This organization is exhibiting defects at twice the industry average. My mentor Tom DeMarco, in his book Slack: Getting Past Burnout, Busywork, and the Myth of Total Efficiency, talked about projects working at breakneck speed. There’s a reason why they call it “breakneck”. See my Cutter Consortium article here.

My point in this diatribe is that when teams attempt projects at breakneck speed, we can guarantee high defect counts. Not just 50 percent more. It could be 200 percent more, like the company just described. It could be 500 percent more, which is the norm when you try to compress the date by even as little as 20 percent. It’s non-linear, which is why as many as 40 percent of projects are cancelled, to the tune of $80 billion to $100 billion per year in write-downs and losses. This is costing us dearly.
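
For a back-of-the-envelope feel for that non-linearity, consider Larry Putnam’s software equation, the model behind the SLIM tools mentioned above: size = productivity × effort^(1/3) × time^(4/3). Hold size and productivity fixed, and effort scales as 1/time^4. The little C sketch below (my own illustration, not QSM’s published figures) shows what that fourth power does to even modest schedule compression:

    #include <stdio.h>
    #include <math.h>

    int main(void) {
        /* Fraction of the nominal schedule actually allowed. */
        double kept[] = {0.95, 0.90, 0.80, 0.70};

        printf("schedule kept   effort multiplier\n");
        for (int i = 0; i < 4; i++) {
            /* From size = PI * E^(1/3) * t^(4/3):
               E = size^3 / (PI^3 * t^4), so E scales as (1/t)^4. */
            printf("     %.0f%%             %.2fx\n",
                   kept[i] * 100.0, pow(1.0 / kept[i], 4.0));
        }
        return 0;
    }

Under this model, compressing the schedule by 20 percent (keeping 80 percent of it) demands roughly two and a half times the effort, and crowding that much extra staff onto a shorter timeline is exactly the pressure cooker in which defect rates explode.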

I guess what it really comes down to is that high-defect, poor-quality software is, these days, more of a fait accompli in our industry. I would like that to change. I think about it when I am conscious of how much software does things like run our medical systems, control our automobiles, manage the power grid, and fly our airplanes. Life in high technology isn’t just about the hands of the clock at the moment we turn on the systems that we’ve worked so hard to build.