Motivation of software professionals

  • Thread starter: Stefan Kiryazov
James said:
The "standard" life of a railway locomotive is thirty or fourty
years. Some of the Paris suburbain trainsets go back to the
early 1970's, or earlier, and they're still running.

Do you happen to know if they've undergone any engineering changes over
those 40 years for safety or performance enhancements?

With worn/damaged parts replacement how much of the original equipment
remains? Wheel sets, motors, controls, seats, doors, couplers,
windshields, etc. all get inspected and replaced on schedule.

Not all locomotives last 40 years.

Design flaws can contribute to a shorter life. For example the Erie Triplex.
http://www.dself.dsl.pipex.com/MUSEUM/LOCOLOCO/triplex/triplex.htm

Although design flaws played a part in the death of the Jawn Henry, I've
heard that N&W's business was undergoing changes that undercut the
company's desire to invest in coal fired power.
http://www.dself.dsl.pipex.com/MUSEUM/LOCOLOCO/nwturbine/nflkturb.htm


To continue with our locomotives, the replacement of coal fired steam by
diesel and electric (No, no, not this one:
http://www.dself.dsl.pipex.com/MUSEUM/LOCOLOCO/swisselec/swisselc.htm ;)
) power was largely driven by maintenance cost, the sort that replaces
the lubricating oil, not the kind that replaces faulty brake systems,
although this played a role too. It's nice to be able to buy parts OTS
if you need them rather than have a huge work force ready to make parts.

I think ultimately the RRs asked themselves if they were in the
locomotive business or the transportation business.

LR
 
Seebs said:
They might be hard to apply, but consider that a great deal of free
software is written without idiots saying "you need to get this done sooner
so we can book revenue this quarter to please shareholders". It's also
often written by particularly good developers, who care about their code.
[...]

I'm not convinced that the majority of free software is of
particularly high quality. But I think that most free software
that's sufficiently popular that you or I have heard of it does
tend to be of high quality. There are (at least) two effects here:
good free software tends to become popular, and useful free software
attracts good developers. The latter effect is less pronounced
in non-free software; however much I might like some proprietary
software package, I'm not likely to switch jobs so I can work on it.

But if you looked at the universe of free software, I'd be surprised
if Sturgeon's Law didn't apply (90% of everything is crud).
 
I'm not convinced that the majority of free software is of
particularly high quality. But I think that most free software
that's sufficiently popular that you or I have heard of it does
tend to be of high quality. There are (at least) two effects here:
good free software tends to become popular, and useful free software
attracts good developers. The latter effect is less pronounced
in non-free software; however much I might like some proprietary
software package, I'm not likely to switch jobs so I can work on it.

But if you looked at the universe of free software, I'd be surprised
if Sturgeon's Law didn't apply (90% of everything is crud).

Sure.

But there's one other huge amplifying effect:

You can read the source so you *know* whether or not it's any good. That
helps a lot. The bad stuff tends to never go anywhere (see our spammer
from last fall with his Unix daemon utility), while the good stuff tends to
do quite well indeed (e.g., Rails).

-s
 
Sure.

But there's one other huge amplifying effect:

You can read the source so you *know* whether or not it's any good.  That
helps a lot.  

I think it's helpful to be able to read code generated
by a compiler. I'm not talking about assembly,
although that is helpful too, but higher-level code.
In C++, due to a number of language features, it's
easy to misunderstand what you are reading. If you
are having a problem and need to research the cause,
reading the latter output can help detect the problem.
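
For example (a small compilable sketch of my own, with made-up names,
not anything from Ebenezer), an implicit conversion quietly creates and
destroys a temporary object that never appears in the source:

    // Hypothetical example: "Handle" and "use" are invented names.
    #include <iostream>
    #include <string>

    struct Handle {
        Handle(const std::string& name) : name_(name) {
            std::cout << "open " << name_ << '\n';
        }
        ~Handle() { std::cout << "close " << name_ << '\n'; }
        std::string name_;
    };

    void use(const Handle& h) { std::cout << "use " << h.name_ << '\n'; }

    int main() {
        std::string name = "audit";
        use(name);  // reads like passing a string, but a temporary Handle
                    // is constructed before the call and destroyed after it
    }

Looking at what actually gets generated for that one call makes the
hidden constructor and destructor visible.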

In my case, I have both open source code --
http://webEbenezer.net/build_integration.html --
and closed source code. The output from the closed
source code is also open source.


Brian Wood
http://webEbenezer.net
(651) 251-9384
 
Jerry said:
[ ... ]
Exactly. Engineering is about measurable outcomes, quantification.
What's the equivalent of "this building can withstand a quake of
magnitude 7.5 for 30 seconds" in software? Can any of us state "this
software will stand all virus attacks for 12 months" or "this software
will not crash for 2 years, and if it does your loss won't exceed 20% of
all digital assets managed by it" ?

Your analogy is fatally flawed, in quite a number of ways.

First of all, a particular piece of software is only one component in
a much larger system of both hardware and software -- where the final
system is generally designed and assembled by a somebody who's not an
engineer at all. What you're asking for isn't like a warranty on a
building. It's more like asking a vendor of steel beams to warrant
that any possible building of any design will withstand earthquake X
as long as it includes this particular component.
[ SNIP ]

And to continue the analogy, what would be reasonable to ask for is that
the steel beam vendor warrant his steel beams provided that they are
properly used according to his specifications. We can actually do that
for software components as well.
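
For instance (a minimal C++ sketch of my own, with invented names, not
any real component), the "proper use according to specifications" can be
written down as a precondition and checked at the component boundary, so
the guarantee only covers callers who respect it:

    // Hypothetical component function; the names and spec are illustrative.
    #include <cassert>
    #include <cstddef>

    // Specification: buf must be non-null and len must be greater than zero.
    // Only under those conditions is the result guaranteed to be the mean
    // of buf[0..len-1].
    double average(const double* buf, std::size_t len) {
        assert(buf != 0 && len > 0);  // outside the spec, no guarantee applies
        double sum = 0.0;
        for (std::size_t i = 0; i < len; ++i)
            sum += buf[i];
        return sum / static_cast<double>(len);
    }

    int main() {
        double readings[] = { 1.0, 2.0, 3.0 };
        return average(readings, 3) == 2.0 ? 0 : 1;
    }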

AHS
 
Seebs said:
Sure.

But there's one other huge amplifying effect:

You can read the source so you *know* whether or not it's any good. That
helps a lot. The bad stuff tends to never go anywhere (see our spammer
from last fall with his Unix daemon utility), while the good stuff tends to
do quite well indeed (e.g., Rails).

-s

*In theory* you can read the source. However, not many professional
developers actually have the time to assess open source code quality by
doing code inspections. I myself tend to go with reviews, previous
experience of software by the same people, experience of older versions
of the same program, and the provided documentation.

And I've used a number of programs for which the source was available
where problems caused us to dive into the code. The code passed visual
inspection, no problem...but it still had defects.

AHS
 
*In theory* you can read the source. However, not many professional
developers actually have the time to assess open source code quality by
doing code inspections. I myself tend to go with reviews, previous
experience of software by the same people, experience of older versions
of the same program, and the provided documentation.

I do too, but the moment I have to look at something, I can start evaluating
it. I pitched a several-week project to management on the basis that I'd
read the code of a component we were using, and concluded, from quality
issues in places where the code worked but wasn't pretty, that it would not
be worth trying to fix the cases where it didn't work.

And I've used a number of programs for which the source was available
where problems caused us to dive into the code. The code passed visual
inspection, no problem...but it still had defects.

Oh, sure. Nearly all code still has defects. The questions that are more
interesting are how easy it will be to work on the code, or how likely it
will be that fixing one defect will reveal others.

-s
 
Nonsense.  Free software has a much higher rate of adoption of best
practices for high quality than for-pay software does.

You say so, too.  It's the "logically" with which I take issue.  That
free software uses the best techniques and has the highest quality in
the marketplace is entirely logical, in addition to being an observed
fact.  You just have to avoid false assumptions and fallacies in reasoning.

Not sure what you mean. There is no such logical binary connection.
The opposite is just as easy to observe.

Just download a few C++ code bases at random from places like
sourceforge.net and review them. One produced using good techniques
is really hard to find there. Most of the code is of such low quality
that it would be unthinkable for it to pass QA peer review in a
professional software house. That is easy to explain, since most of it
is the hobby work of non-professionals who find software development
amusing, or of professionals in other languages who are learning C++
as a hobby.

Results are slightly better with larger and more popular open source
products, but that is often thanks to a huge tester and developer base,
not to good techniques.

In the best shape are open source projects that are popular and that
commercial companies actively participate in, since they need these
projects for building or supporting their commercial products. Again,
it is easy to see how the companies actually enforce techniques and
quality there, and it is likely that the companies apply even higher
standards in-house.

The worst I have seen is code written by the in-house software
departments of some smaller non-software companies, but that again is
easy to explain by the workers of those departments obfuscating their
work to gain job security.

So everything has a logical explanation, and there are no silly binary
connections like free = quality and commercial = lack of quality.
 
Most software programs I have to work
with do not have show stopper bugs, and realistically do not need to be
"recalled".
I'm using Matlab at the moment. It seems to crash about once every two
days. A bug in a C subroutine will also take down the whole system.
That's irritating but liveable with. However, what if the results of my
scientific programs have small errors in them? No one will die, but
false information may get into the scientific literature. If I was
using it for engineering calculations, someone might die. However, the
chance of a bug in my bespoke Matlab code is probably orders of
magnitude greater than a bug in Matlab's routines themselves. So does
it really matter?
 
Malcolm said:
I'm using Matlab at the moment. It seems to crash about once every two
days. A bug in a C subroutine will also take down the whole system.
That's irritating but liveable with. However, what if the results of my
scientific programs have small errors in them? No one will die, but
false information may get into the scientific literature. If I was
using it for engineering calculations, someone might die. However, the
chance of a bug in my bespoke Matlab code is probably orders of
magnitude greater than a bug in Matlab's routines themselves. So does
it really matter?

This is a really good point. I've worked with programs that deal with
health information, others that deal with vital statistics
(birth/death/marriage etc), others that deal with other public records
(like driver licensing and vehicle registration). I've also spent quite
a few years (not recently) writing programs that manipulate scientific
data (primarily oceanographic data).

In the case of the latter (the oceanographic data), naive acceptance of
my program output by a researcher might have led to professional
embarrassment, or in the worst case it could have skewed public policy
related to fisheries management or climate science. Realistically though
we had so many checks at all stages that the chances of errors were minute.

Data integrity defects, or the hint thereof, in the driver's
license/vehicle registration/vital statistics applications certainly
cause(d) a lot of sleepless nights, but the effects here are individual,
and depending on who you are talking about tend to confine themselves to
career damage and again embarrassment, and some wasted time and money,
but rarely anything truly serious.

Data integrity errors in health information are ungood. Period.

In the first case (oceanographic data processing) occasional crashes of
programs were irrelevant. All the software was custom, and it's not like
it had to be running 24/7. In the case of the motor vehicle registry
systems, or the vital statistics systems, it does need to be running
24/7 (e.g. the police need to run plates at 3 AM as much as they do at 3
PM), but ultimately a crash is still only an embarrassment. In the case
of health information (e.g. a system that paramedics can use to query
MedicAlert data while on a call), a crash is unacceptable.

Depending on the application, you can describe the impact of both data
integrity errors and downtime due to crashes (or extreme sluggishness).
All this stuff really needs to be part of system requirements (not just
software requirements). I've noticed too that many programmers tend to
concentrate on issues like uptime, performance and business logic, but
completely or substantially ignore matters like encodings, data type
conversions, data value sanity checking and invariants, to mention a
few. IOW, they do not have a good understanding of the data.
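
For example (a hypothetical sketch with invented field names, not taken
from any of the systems mentioned above), data value sanity checking
just means enforcing the documented invariants at the point where the
data enters the program:

    // Hypothetical record type; the ranges are the documented invariants.
    #include <stdexcept>

    struct Observation {
        double latitude;       // degrees, must be in [-90, 90]
        double longitude;      // degrees, must be in [-180, 180]
        double temperature_c;  // sea-surface temperature, degrees C
    };

    // Reject values that violate the invariants instead of trusting the source.
    void validate(const Observation& obs) {
        if (obs.latitude < -90.0 || obs.latitude > 90.0)
            throw std::invalid_argument("latitude out of range");
        if (obs.longitude < -180.0 || obs.longitude > 180.0)
            throw std::invalid_argument("longitude out of range");
        if (obs.temperature_c < -2.0 || obs.temperature_c > 40.0)
            throw std::invalid_argument("implausible sea-surface temperature");
    }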

Ultimately what is tolerable obviously varies, both with respect to
system availability and also with respect to data integrity. If we have
defects then we are guaranteed to have defects in both areas. Personally
I believe that with respect to the latter - data integrity -
software developers at large could learn a lot from scientific
programmers, but good luck with that one.

AHS
 
[ SNIP ]
There used to be a lot less free stuff available, and it was
worse. (It doesn't make sense to me, either, but those are
the facts.)
Clearly. The problem is that most commercial firms don't do
that.
Right, and that's because usually the _only_ difference
between free and commercial software right now is the price.
Paid-for software doesn't come with any more guarantees or
support than the free stuff does; in most cases you actually
have to pay extra for support packages.
In effect the commercial software is also crappy because we do
not hold it to a higher standard. I believe that a
well-thought-out system of software certifications and hence
guarantees/warranties will lead to a saner market where the
quality of a product is generally reflected in its cost.

I think you're maybe confusing cause and means. I'm not
convinced that certification of professionals is necessary; I am
convinced that some "implicit" warrenties are necessary, and
that if an editor trashes my hard disk, the vendor of the editor
should be legally responsible.

Certification, in practice, only helps if 1) the vendor is
required to use only certified people in the development
process, 2) the certification really does verify ability in some
way, and 3) the vendor allows the certified people to do things
in the way they know is correct. In practice, I don't think 1
and 3 are likely, and in practice, there are plenty of capable
people around today, without certification, who would do a very
good job if the vendors would ask them to do it, and structure
their organization so they can. I've worked in places where
we've produced code with quality guarantees, and where we've
produced code which met those guarantees. And
the people there weren't any more (or less) qualified than the
people I've seen elsewhere. The problem isn't the competence of
the practitioners (which is the problem certification
addresses), but the organizations in which they work.
Someone with a "glass is half-empty" perspective on this might
bemoan the fact that the higher cost would be all about
absorbing the cost of recalls and lawsuits and what not; I
think the other view, that the higher cost reflects the higher
quality, and that you will expect _fewer_ recalls and
lawsuits, is just as valid, if not more so.

The lawsuits are going to come. The insurance companies are
convinced of it, which is why liability insurance for a
contractor is so expensive (or contains exclusion clauses,
because the insurer doesn't know how to estimate the risk).
 
Nonsense. Free software has a much higher rate of adoption of
best practices for high quality than for-pay software does.

Really. I've not seen any free software which adopted all of
the best practices. In my experience, some of the best
practices require physical presence, with all of the developers
having offices in the same building. (The experiments I've seen
replacing this with email and chat haven't turned out all that
well.) This is far more difficult for a free project to achieve
than for a commercial one.
You say so, too.

What I said is that apparently, many commercial shops don't take
advantage of their advantages. For example, one of the key
factors in developing high quality software is communication.
And communication is, or should be, easier when everyone works
in the same plant. Nevertheless, one continually hears
stories about lack of communication in such cases; about
internal competition even leading to misinformation. The
potential in a commercial organization is higher, but it's clear
that many such organizations aren't using it (and that a few
free projects are using everything they can).
It's the "logically" with which I take issue. That free
software uses the best techniques and has the highest quality
in the marketplace is entirely logical, in addition to being
an observed fact. You just have to avoid false assumptions
and fallacies in reasoning.

First, free software doesn't have the highest quality. When
quality is really, really important (in critical systems), you
won't see any free software. I'm certain that no free software
project is certified at SEI level 5, and from what I've seen,
very few reach SEI level 2. Some commercial organizations (one
or two) are certified at SEI level 5, and I've worked for some
that were around level 3. Most of the ones selling the software
we usually use (e.g. Microsoft, Sun, etc.) are still at level 1,
however.
ClearCase is an unwieldy pig. You hold that up as an example of high
quality?

ClearCase uses a different model than any of the other version
management tools I've used. In particular, the model is
designed for large projects in a well run shop---if your
organization isn't up to par, or if your projects are basically
small (just a couple of people, say up to five), ClearCase is
overkill, and probably not appropriate. If you're managing a
project with five or six teams of four or five people each, each
one working on different (but dependent) parts of the project,
and you're managing things correctly, the ClearCase model beats
the others hands down.

My statement wasn't really clear, however: it's the ClearCase
model which makes it the best choice in such cases, not the
quality of the software. I've no reason to believe that
ClearCase is developed using a better methodology than anything
else.
Admittedly, it's better than a lot of other version-control
products, but not nearly as good as the free ones.

As one of the free ones, in terms of quality, perhaps. The
model is different, so it's very difficult to compare. In cases
where the ClearCase model is preferable, ClearCase is stable
enough that you're better off using it than something supporting
a different model.
No, they're not the facts. Since the beginning of free
software, much of it has been very high quality. I doubt very
much that the ratios have changed much, or if they have,
perhaps you could substantiate your "facts".

Did you actually try using any free software back in the early
1990's? Neither Linux nor g++ was even usable, and with emacs (by
far the highest quality free software) it was touch and go,
depending on the version. Back then, the free software community
was very much a lot of hackers, doing whatever they felt like,
with no control. Whereas all of the successful free software
projects today have some sort of central management, ensuring
certain minimum standards.
I don't dispute that there used to be a lot less free
software, insofar as there used to be a lot less software of
any description. It's your undefined use of "worse" without
evidence that I dispute.

I was there. For the most part, free software was a joke.
A mistake in a car model serious enough to effect a recall affects
every instance of that model. Bugs in software, OTOH, affect
only a small subset of the total production of software.

I'll admit that that paragraph is just speculation on my part.
And it's speculation with regards to the motivation for not
providing guarantees: the real issues are far more complex.
You haven't been paying much attention to the news lately,
have you?

The percentage of Toyota's production which is affected is
considerably smaller than what would happen if Microsoft were
required to recall Windows.

On the other hand, of course, software allows user installable
patches (what Microsoft does when there is a critical bug),
where as with a car, you generally have to bring it into the
shop, at much greater cost to the manufacturer (and to you).
 
The problem isn't the competence of
the practitioners (which is the problem certification
addresses), but the organizations in which they work.
Also the problem itself. It is impossible to test MiniBasic on all
the different paths that all possible scripts could take it through,
for example. (I wrote little test scripts to test each statement
individually when developing it.) On the other hand, a game like
"Defender" has a very limited set of user inputs.
 
Nonsense. Free software has a much higher rate of adoption
of best practices for high quality than for-pay software
does.
You say so, too. It's the "logically" with which I take
issue. That free software uses the best techniques and has
the highest quality in the marketplace is entirely logical,
in addition to being an observed fact. You just have to
avoid false assumptions and fallacies in reasoning.
Not sure what you mean. There is no such logical binary
connection. The opposite is just as easy to observe.
Just download a few C++ code bases at random from places like
sourceforge.net and review them.

I'm not sure that that's significant. It's less expensive to
publish free software, so you get a lot of idiots doing it. But
these are mostly products that no one is interested in. And
there are actually quite a few start-up companies which do
exactly the same thing. The difference is that the start-up
company will go out of business and disappear, whereas the
code on SourceForge just sits there.

If you're talking about successful projects, there are some good
free ones.
One produced using good techniques is really hard to find
there. Most of the code is of such low quality that it would be
unthinkable for it to pass QA peer review in a professional
software house.

I've had the chance of working mostly in well run shops, but
I've seen places where there was no peer review. Not all
commercial shops are better.
That is easy to explain, since most of it is the hobby work of
non-professionals who find software development amusing, or of
professionals in other languages who are learning C++ as a hobby.
Results are slightly better with larger and more popular open
source products, but that is often thanks to a huge tester and
developer base, not to good techniques.

At least some of the larger open source projects have a steering
committee, and do practice at least some sort of code review and
regression testing.
In the best shape are open source projects that are popular and
that commercial companies actively participate in, since they
need these projects for building or supporting their commercial
products. Again, it is easy to see how the companies actually
enforce techniques and quality there, and it is likely that the
companies apply even higher standards in-house.
The worst I have seen is code written by the in-house software
departments of some smaller non-software companies, but that
again is easy to explain by the workers of those departments
obfuscating their work to gain job security.
So everything has a logical explanation, and there are no silly
binary connections like free = quality and commercial = lack
of quality.

Quality is largely determined by the development process. Some
of the better processes probably can't be applied to a free
project, at least not easily. But a lot of commercial projects
aren't applying even a minimum, and some of the better freeware
projects have applied some of the easier and more obvious
techniques. In the end, if you want quality, you have to
consider the process used to develop the software, independently
of the costs.
 
They might be hard to apply, but consider that a great deal of
free software is written without idiots saying "you need to
get this done sooner so we can book revenue this quarter to
please shareholders".

If your point is that some (most?) commercial vendors don't have
a good development process, I already pointed that out.
It's also often written by particularly good developers, who
care about their code.

That I don't believe. I've seen a lot of particularly good
developers in industry as well. People who care about their
code---in fact, one of the most important things in creating a
good process is to get people to care about their code.
It is also probably an influence that free software writers
expect the code itself to get feedback, not just the behavior
of the application. I have submitted bug reports about
poorly-expressed code, not just about code which didn't work.

In a well run development process, such feedback is guaranteed,
not just "expected". That's what code reviews are for.
Again, I don't think there's actually any force driving that.
The benefits of well-written software are significant enough
that it is likely worth it to some people to improve software
they have access to, and if it's worth it to them to do that,
it costs them virtually nothing to release the improvements.
Free software often ends up with the best efforts of hundreds
of skilled programmers, with active filtering in place to keep
badly-written code from sneaking in.

I'm far from sure about the "often", and I have serious doubts
about "hundreds"---you don't want hundreds of cooks spoiling the
broth---but that's more or less the case for the best run
freeware projects. Which is no different from the best run
commercial organizations, with the difference that the
commercial organization has more power to enforce the rules it
sets.
If you are implying that CC is actually usable to you, that
marks a first in my experience. No one else I've known has
ever found it preferable to any of the open source tools, of
which git is probably currently the most elegant.

ClearCase is by far the best version management system for
large, well run projects. It's a bit overkill for smaller
things, and it causes no end of problems if the project isn't
correctly managed (but what doesn't), but for any project over
about five or six people, I'd rather use ClearCase than anything
else.
Another issue is that, if you give away open source software,
people can modify it. If you modify my code, and your
modification is not itself buggy, and my code is not itself
buggy, but your modification causes some part of my code not
to work as expected, whose fault is that? This kind of thing
is a lot more complicated with code than it is with physical
objects. You don't have a million people using a bridge, and
a couple hundred thousand of them are using the bridge
recompiled for sports cars, and another couple hundred
thousand are running it with a third-party tollbooth
extension.

That is, of course, a weakness of free software. A company
using it, however, should be able to manage this (although I
once worked for a company where one employee would slip
modifications into the g++ we were trying to use for production
code, without telling anyone).
 
[I really shouldn't have said "most" in the above. "Some"
would be more appropriate, because there are a lot of
techniques which can be applied to free development.]
I'm not sure what you are referring to, but one thing we
agree is important to software quality is code reviewing.
That can be done in a small company and I'm sometimes
given feedback on code in newsgroups and email.

To be really effective, design and code review requires a
physical meeting. Depending on the organization of the project,
such physical meetings are more or less difficult.

Code review is *not* just some other programmer happening to
read your code by chance, and making some random comments on
it. Code review involves discussion. Discussion works best
face to face. (I've often wondered if you couldn't get similar
results using teleconferencing and emacs's make-frame-on-display
function, so that people at the remote site can edit with you.
But I've never seen it even tried. And I note that where I
work, we develop at two main sites, one in the US and one in
London; we make extensive use of teleconferencing, and the
company still spends a fortune sending people from one site to
the other, because even teleconferencing isn't as good as face
to face.)
Maybe now that Sun CC and VC++ are free they'll improve. :)

I doubt it. Making something free doesn't change your
development process. (On the other hand, if it increases the
number of users, and thus your user feedback, it may help. But
I don't think any quality problems with VC++ can be attributed
to a lack of users.)
I'm not sure about Sun CC, but guess that it is free with
Open Solaris. Still I'm not comfortable with g++'s foundation.
I would like to think that VC++, written mostly in C++, is at
least able to produce a draw when up against g++.

There are a lot of factors which affect quality, but the basic
development process is by far the most important one. And from
what I've seen, I'd guess that Microsoft doesn't have a
particularly good process. Note that it's a lot easier to have
a good process when relatively few people are involved. Which
works against Microsoft, and also to a degree against g++. And
may contribute to explaining why the EDG front-end is so good
(along with the fact that it's probably easier to find four
exceptional people than to find 400).

[...]
That may be a reason why an on line approach makes sense.
Since you haven't shipped out instances of the program,
just make sure the instances that exist on your servers
are corrected. The other way, a court in a distant country
might hold you liable if some customers didn't receive a
message that they should update their copy.

Who knows what a court in a distant country may decide. (Note
that Microsoft now uses the push model for patches---by default,
automatic upgrading is activated, and you get all of the latest
patches for Windows, whether you asked for them or not.)
 
James Kanze wrote: [...]
The "standard" life of a railway locomotive is thirty or fourty
years. Some of the Paris suburbain trainsets go back to the
early 1970's, or earlier, and they're still running.
Do you happen to know if they've undergone any engineering
changes over those 40 years for safety or performance
enhancements?

Engineering changes, I don't know; I think in many cases, no.
(The "petit gris" commuter equipment in the Paris area certainly
hasn't changed much since its introduction.) But they are
maintained, with regular check-ups, replacement of worn parts,
etc., and if there were a safety defect, it would be corrected.
With worn/damaged parts replacement how much of the original
equipment remains? Wheel sets, motors, controls, seats,
doors, couplers, windshields, etc. all get inspected and
replaced on schedule.

Certainly. Hardware wears out. Even on your car, you'll
replace the brake pads from time to time (I hope). In the case
of locomotives, a lot more gets changed. But for the most part,
it's a case of replacing a standard component with a new, but
otherwise identical, component.

Not that that was my point. My point was that any embedded
software they're using was written before 1975 (more or
less---in the case of the "petit gris", before 1965, when the
first deliveries took place).

(The "petit gris" are the Z 5300 "automotrices" used by the
French railways in suburban service. They're very well known to
anyone commuting in the Paris area. I'm not aware of any
information about them in English, but
http://fr.wikipedia.org/wiki/Z_5300 has some information in
French, for those who can read French and are interested. The
main point is that they were put into service starting in 1965,
and are still in service, without any real changes, today.)
Not all locomotives last 40 years.
Design flaws can contribute to a shorter life. For example the
Erie Triplex.
http://www.dself.dsl.pipex.com/MUSEUM/LOCOLOCO/triplex/triplex.htm

Certainly, and others might last longer. (But somehow, I doubt
that the Erie Triplex had any embedded software, that could have
failed if the locomotive had still been in use in the year
2000.)
Although design flaws played a part in the death of the Jawn Henry, I've
heard that N&W's business was undergoing changes that undercut the
company's desire to invest in coal fired power.
http://www.dself.dsl.pipex.com/MUSEUM/LOCOLOCO/nwturbine/nflkturb.htm
To continue with our locomotives, the replacement of coal
fired steam by diesel and electric (No, no, not this one:
http://www.dself.dsl.pipex.com/MUSEUM/LOCOLOCO/swisselec/swisselc.htm ;)
) power was largely driven by maintenance cost, the sort that
replaces the lubricating oil, not the kind that replaces
faulty brake systems, although this played a role too. It's
nice to be able to buy parts OTS if you need them rather than
have a huge work force ready to make parts.

Yes. But that's not really the issue here. I'm not sure when
the Swiss started using regenerative braking on the Gotthard
line, but when they did, they obviously had to retrofit a number
of locomotives in order for it to work. But that doesn't mean
that the original locomotives weren't designed with the idea
that they'd be used 40 years; it doesn't necessarily mean that
all of the programs embedded in them were replaced (although I
think that a move to regenerative braking might affect most of
them).
 
James said:
[ SNIP ]
I think you'd find that if there was much less free stuff
available that we'd have a _different_ economic model, not
necessarily a _worse_ one.
There used to be a lot less free stuff available, and it was
worse. (It doesn't make sense to me, either, but those are
the facts.)
I look at warranties differently than you do. To me a
warranty means that I used proper development practices, I
can make informed statements that the published software is
actually fit for a stated use, and that I care enough about
the program to offer some support.
Clearly. The problem is that most commercial firms don't do
that.
Right, and that's because usually the _only_ difference
between free and commercial software right now is the price.
Paid-for software doesn't come with any more guarantees or
support than the free stuff does; in most cases you actually
have to pay extra for support packages.
In effect the commercial software is also crappy because we do
not hold it to a higher standard. I believe that a
well-thought-out system of software certifications and hence
guarantees/warranties will lead to a saner market where the
quality of a product is generally reflected in its cost.

I think you're maybe confusing cause and means. I'm not
convinced that certification of professionals is necessary; I am
convinced that some "implicit" warrenties are necessary, and
that if an editor trashes my hard disk, the vendor of the editor
should be legally responsible.

Certification, in practice, only helps if 1) the vendor is
required to use only certified people in the development
process, 2) the certification really does verify ability in some
way, and 3) the vendor allows the certified people to do things
in the way they know is correct. In practice, I don't think 1
and 3 are likely, and in practice, there are plenty of capable
people around today, without certification, who would do a very
good job if the vendors would ask them to do it, and structure
their organization so they can. I've worked in places where
we've produced code with quality guarantees, and where we've
produced code which met those guarantees. And
the people there weren't any more (or less) qualified than the
people I've seen elsewhere. The problem isn't the competence of
the practitioners (which is the problem certification
addresses), but the organizations in which they work.

This is all true, and IMO you can only make all of that happen if we
have true professionals. There is however more needed in order to tie it
all together, and you've touched upon it. For certain types of work -
taxpayer-funded for starters - it would not be permitted to use
non-professionals. Given that, and the fact that professionals have a
duty to do proper work, no PM would be able to legally go against the
advice of his developers when the latter deliver it as a professional
recommendation.

But I agree with you, I don't think 1 or 3 are likely. Hell, I don't
think 2 is likely either.
 
Do you happen to know if they've undergone any engineering changes over
those 40 years for safety or performance enhancements?
Some have gone on a *lot* longer: the youngest engine on the Darjeeling
Railway was built in 1925 and I remember seeing a brass plate on one
saying that it was built in Glasgow in 1904. More recently, one was
converted from coal to oil and fitted with a diesel generator to run the
new electric water feed system and a diesel compressor for braking.
Details are here:
http://en.wikipedia.org/wiki/Darjeeling_Himalayan_Railway
 
James said:
James Kanze wrote:
On 12 Feb, 22:37, Arved Sandstrom <[email protected]> wrote:
I think you'd find that if there was much less free stuff available
that we'd have a _different_ economic model, not necessarily a
_worse_ one.
There used to be a lot less free stuff available, and it was worse.
(It doesn't make sense to me, either, but those are the facts.)
I look at warranties differently than you do. To me a warranty means
that I used proper development practices, I can make informed
statements that the published software is actually fit for a stated
use, and that I care enough about the program to offer some support.
Clearly. The problem is that most commercial firms don't do that.
Right, and that's because usually the _only_ difference between free
and commercial software right now is the price. Paid-for software
doesn't come with any more guarantees or support than the free stuff
does; in most cases you actually have to pay extra for support
packages.
In effect the commercial software is also crappy because we do not
hold it to a higher standard. I believe that a well-thought-out system
of software certifications and hence guarantees/warranties will lead
to a saner market where the quality of a product is generally
reflected in its cost.

I think you're maybe confusing cause and means. I'm not convinced that
certification of professionals is necessary; I am convinced that some
"implicit" warrenties are necessary, and that if an editor trashes my
hard disk, the vendor of the editor should be legally responsible.

Certification, in practice, only helps if 1) the vendor is required to
use only certified people in the development process, 2) the
certification really does verify ability in some way, and 3) the vendor
allows the certified people to do things in the way they know is
correct. In practice, I don't think 1 and 3 are likely, and in
practice, there are plenty of capable people around today, without
certification, who would do a very good job if the vendors would ask
them to do it, and structure their organization so they can. I've
worked in places where we've produced code with quality guarantees, and
where we've produced code which met those guarantees.
And the people there weren't any more (or less) qualified than the
people I've seen elsewhere. The problem isn't the competence of the
practitioners (which is the problem certification addresses), but the
organizations in which they work.

This is all true, and IMO you can only make all of that happen if we
have true professionals. There is however more needed in order to tie it
all together, and you've touched upon it. For certain types of work -
taxpayer-funded for starters - it would not be permitted to use
non-professionals. Given that, and the fact that professionals have a
duty to do proper work, no PM would be able to legally go against the
advice of his developers when the latter deliver it as a professional
recommendation.
That would never happen in the British civil service: the higher
management grades would feel threatened by such an arrangement. They'd
use sabotage and play politics until the idea was scrapped. I bet the
same would happen in the US Govt too.
 