Alvin Bruney
Jon,
them standards of yours are just wayyyyyyyyyyyyyyyyyyyyyyyy too lofty for the
lowly.
> a) By concentrating more on the design in the first place than the
> details later on, they could probably have gained more performance
> anyway.
The fundamental rule of performance, according to John Robin, is that the
code must be up and working correctly. Only then do you performance tune, or
call in the hotshots.
> b) By concentrating on simple, readable code instead of code which runs
> very quickly, they're likely to have fewer bugs.
Same as above.
> c) Going after performance where it's not needed is just a waste of
> effort.
How do you determine that it's not needed? It's extremely subjective, isn't
it? The bottom line is the user experience. So go after it where it makes a
big difference to the user. They are the ones buying the product. They
should be happy. Obviously, and I suspect this is rhetorical but I'll bite
anyway, a good performance tuner ignores the portions of code where tuning
will result in only an incremental improvement. But you already knew that. I
knew it too, so there really wasn't any need to bring it up. It's a given. A
good tuner fishes out her toolkit and sets about finding the portions of the
code which are running slow relative to the user experience and makes a
judgement call on whether it's worth going after.
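
To put that "measure it, then make the judgement call" idea in concrete terms,
here is a minimal sketch in Java (the thread itself names no language). The
200 ms threshold and the loadOrders() stand-in are invented for this example,
not taken from either post.

// Time one suspect operation and decide whether it is even worth tuning.
public class CrudeTimer {
    // Delay below which the user is unlikely to notice (assumed threshold).
    private static final long NOTICEABLE_MILLIS = 200;

    public static void main(String[] args) throws Exception {
        long start = System.nanoTime();
        loadOrders();   // the piece of the product under suspicion
        long elapsedMillis = (System.nanoTime() - start) / 1_000_000;

        if (elapsedMillis > NOTICEABLE_MILLIS) {
            System.out.println("Worth a closer look: " + elapsedMillis + " ms");
        } else {
            System.out.println("Leave it alone: " + elapsedMillis + " ms");
        }
    }

    // Stand-in for whatever part of the product is being measured.
    private static void loadOrders() throws Exception {
        Thread.sleep(150);   // simulate some work
    }
}

A real profiler would point at the slow spots more precisely, but the
judgement call at the end is the same: tune only what the user can actually
feel.
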
> That's not the way it happens though - people try to go after
> performance when they really don't need to, and indeed shouldn't due to
> the loss of readability which is often involved when going for really
> fast code.
Isn't that a balance a good programmer strikes before she goes in? And since
a performance junkie IS a good programmer, she already knows what to touch
and what to stay away from. Probably junkie is a bad word. It suggests a
callous disregard for effects stemming from the cause. I hope you didn't
read it that way. How about performance enthusiast?
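
As a purely illustrative aside (not from either post), here is what that
readability trade-off can look like in Java: two versions of the same sum,
one written for clarity and one hand-unrolled in the name of speed.

public class SumStyles {
    // The straightforward version most readers can verify at a glance.
    static long simpleSum(int[] values) {
        long total = 0;
        for (int v : values) {
            total += v;
        }
        return total;
    }

    // A manually unrolled version of the same loop. It may or may not be
    // faster under a given JIT, but it is clearly harder to read and to
    // get right.
    static long unrolledSum(int[] values) {
        long total = 0;
        int i = 0;
        int limit = values.length - 3;
        for (; i < limit; i += 4) {
            total += values[i] + values[i + 1] + values[i + 2] + values[i + 3];
        }
        for (; i < values.length; i++) {
            total += values[i];
        }
        return total;
    }

    public static void main(String[] args) {
        int[] data = {1, 2, 3, 4, 5, 6, 7, 8, 9, 10};
        System.out.println(simpleSum(data));    // 55
        System.out.println(unrolledSum(data));  // 55
    }
}
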
> The order in which I'd develop in an ideal world would go something
> like:
> 1) Write class specification, bearing in mind architectural
> performance (often more of a system issue than an individual class
> issue)
> 2) Write class documentation and empty method stubs
> 3) Write unit testing code
> 4) Write first pass simple implementation
> 5) Get that implementation working against all the tests
> 6) Run system tests (developed in parallel by a separate team,
> probably) including performance measurements
> 7) If system doesn't perform adequately, isolate bottleneck and
> optimise
Skeet,
that is tooooo lofty. Yes, we are taught that in school, but the real world
operates at a tangent to these rules.
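
For what those steps look like in code, here is a minimal, self-contained
Java sketch of steps 2) through 5) of Skeet's list above. The PriceCalculator
class, its discount rule, and the hand-rolled assertion are all invented for
illustration; a real project would use a proper test framework such as JUnit.

public class PriceCalculator {

    /**
     * Step 2: documented method (originally an empty stub).
     * Returns the total price for the given quantity at the given unit
     * price, applying a 5% discount when quantity exceeds 100.
     */
    public double totalPrice(int quantity, double unitPrice) {
        // Step 4: first-pass, deliberately simple implementation.
        double total = quantity * unitPrice;
        if (quantity > 100) {
            total *= 0.95;
        }
        return total;
    }

    // Step 3: tests written against the stub; step 5 is making them pass.
    public static void main(String[] args) {
        PriceCalculator calc = new PriceCalculator();
        assertClose(50.0, calc.totalPrice(10, 5.0));
        assertClose(950.0, calc.totalPrice(200, 5.0));
        System.out.println("All tests passed");
    }

    private static void assertClose(double expected, double actual) {
        if (Math.abs(expected - actual) > 1e-9) {
            throw new AssertionError("expected " + expected + " but got " + actual);
        }
    }
}
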
I know you aren't suggesting that you work this way, because you would
absolutely discourage the mid-level programmer AND the small business owner
looking to employ said programmer.
Here is how it really works:
1. write code
2. make sure it's working
3. any combination of the following:
   a. design
   b. unit what?
   c. first pass/beta/final release wrapped all in one
4. maybe, depending on time, document
I'm being facetious on the last part, so don't get all bent out of shape. But
I venture this is really happening out there.
In short, a diligent professional always weighs the cost and benefit of the
investment. Said professional makes, hopefully, a sound decision based on
the return on that investment.