[plt-scheme] Perplexed Programmers
On Aug 25, 2007, at 1:47 PM, Anton van Straaten wrote:
> Richard Cleis wrote:
>> A tenth of a billion dollars was spent on a payroll system that
>> doesn't work because "complicated, varied job assignments and pay
>> scales have perplexed computer programmers."
>> http://www.latimes.com/news/local/la-me-payroll25aug25,0,630079.story?track=mostviewed-storylevel
>> As computers and computer science mature, these stories (and my
>> own trivial experiences) get worse. What's going wrong?
>
> What's going wrong? I'd bet it's all embodied in that sentence
> "complicated, varied job assignments and pay scales". A highly
> likely translation of that is that as yet, there exists no
> specification that fully and accurately describes the required
> behavior of the system, including all the fudging and special-
> casing that the humans involved in the process rely on. Without
> such a specification, it is of course impossible to produce a
> working program to automate these processes.
>
> This is a pretty common situation: organizations that perform
> complex processes are not typically sufficiently "self-aware" to
> communicate the details of those processes comprehensively or with
> any precision. The processes work because of a network of
> adaptable humans who each have pieces of the puzzle. If one
> imagines an organization as a brain, the process-following behavior
> is essentially an unconscious one, akin to the classic example of a
> dog catching a ball.
This assumes that the system was never automated, which I find hard
to believe. I see your point, though: even if it were automated, if
hundreds of humans were 'gaming the system' to make it work, then the
specification process couldn't have been adequate unless the
developers studied those human efforts very carefully. I know the
inside story of a *very* famous company that was in such a
situation. They were finally burned when business was going so well
that human co-processing became inadequate, and the software that ran
the facility had to be fixed under exceptional circumstances.
>
> Imagine a dog approaching a consulting firm to automate its ball-
> catching behavior. The dog can't tell the consultants the
> equations governing the motion of the ball, or how it responds to
> that motion. The consultants have to figure that out, and they
> have to do so from the "outside", since they don't do ball-catching
> themselves.
>
> There's no easy way to test specifications once they're developed,
> either. Perhaps the most reliable approach so far is to develop
> systems incrementally, but for some complex systems that's not very
> practical ("irreducible complexity").
If this is true, then why the 95M$ price tag? That's about 2000
dollars per case (48k "certified employees"). Obviously that's an
exaggeration, because the price tag includes equipment and training.
Am I expecting too much from development processes? As an engineer/
scientist, I am tempted, inexplicably, to use the square root of 48k
as an estimate of the number of independent problems: hundreds. That
is a formidable number, but if such problem sets can't be solved
reliably, then program development has a long way to go. I don't want
to believe that this is true.
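For what it's worth, the back-of-envelope numbers above are easy to check. This is just a sketch of the arithmetic using the figures quoted in the article (95M$, 48k employees); the square-root heuristic is my own rule of thumb, not anything from the article:

```python
import math

employees = 48_000       # "certified employees" from the article
price_tag = 95_000_000   # the 95M$ payroll system

# Cost per case: price tag divided by employees -- roughly $2000.
per_case = price_tag / employees

# Square-root heuristic: estimate the number of independent
# sub-problems as sqrt(48k) -- "hundreds", as claimed above.
independent_problems = round(math.sqrt(employees))

print(round(per_case))        # ~1979, i.e. about $2000 per case
print(independent_problems)   # 219
```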
>
> Related to all this is overambitiousness and the resulting
> overspecification. Both managers and programmers, when specifying
> and designing an automated system, have a tendency to want to
> automate it down to the last detail. Programmers want this for a
> variety of reasons which probably don't need to be explained here.
> Managers want it because of the wet dream of systems that minimize
> the human element, so that all you need are some cheap keypunching
> labor to keep the system running.
>
> But implicit in this overspecification is a tendency to
> underestimate the degree to which human-run processes adapt to
> changes as a matter of course. Again, this adaptation happens
> "unconsciously": individuals respond to special cases and do what's
> necessary to keep the process running. The organization as a whole
> isn't "aware" of all of these adaptations, e.g. they're not all
> documented (ISO 9001, anyone?). This adaptiveness is damaged when a
> rigid computerized system is imposed: essentially, the system's
> specification changes regularly, which leads to abuses of the
> system to work around the things it cannot handle, which begins a
> cycle of problems: now the group is working around the automated
> system, and you're dealing with something bigger than mere bugs in
> the software.
I repeat, then: for what is the 95M$ paying? An inadequate job can
be completed for far less money. I know of State development
processes that fail for the reasons you give, but the price is not
astronomical (even including the cost of desperate attempts at
outsourcing).
>
> This is the exact opposite of the socialware approach of exploiting
> group intelligence: it explicitly tries to take as many decisions
> as possible out of the hands of the group. The systems that fail
> are not as adaptable as the groups that they're supposed to semi-
> replace.
>
> All of the above is just one dimension of the problem. There are
> many others. An oft-discussed one which tends to come into play in
> this sort of development are the typical corporate approaches to
> project management, which don't apply well to software
> development. Stories about this abound on the web, e.g. Reg
> Braithwaite's recent article about software not being made of
> bricks: http://weblog.raganwald.com/2007/08/bricks.html
Hmmm. We have such stories, but I think we can make the typical
approaches work better... and I am a traitor to my colleagues for
saying so. :) Several improbable changes need to be made first,
though. For one thing, software development must be treated as
'solving problems,' as opposed to 'writing code.' That forces the
social issues you describe to be dealt with early in the project,
where they need to be given appropriate attention by management.
Instead, the non-code parts of problem solving don't march in the
'earned value' parade, even though they can easily be the most
important and difficult part of the problem.
>
>> Oh, never mind. This forum is for cheerier topics... like what
>> kind of Mean Scheme Machine could be built for the 37M$ that will
>> be spent on fixing the 95M$ problem.
>
> The limitations of programming languages and software tools are
> another big factor. The response to this is a high-speed, never-
> ending cycle of "upgrades" which means that any chosen set of tools
> forms an incredibly unstable platform, and regular upgrades and the
> resulting ripple effects are all but essential.
>
> In any case, silver bullets don't help if you're shooting at the
> wrong monster.
>
> Anton