Dave Brueck
2003-01-30 20:57:34 UTC
I'm used to the compiler/linker giving me a lot of reassurance that my
code works.
That's common, but in many cases it's a _false_ sense of reassurance.
Even in C++ I end up checking all uses of a refactored function despite
what the compiler says because it can catch only the simplest of problems.
The more nasty problem is changing a function to have, say, an extra
argument. In C++ I'd just run the code through the compiler and it would
tell me all the places where the code needs updating.
Not necessarily; consider what happens when you change an API that has
default arguments - your compiler may or may not catch bugs that got
introduced. Here's a contrived example of changing this:
void Foo(int x, int y, int alpha=0)
to this:
void Foo(int x, int y, int z, int alpha=0)
Old calls like Foo(x,y) are flagged by the compiler, but old calls like
Foo(x,y,alpha) are not, and are bugs.
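For what it's worth, the same hazard exists in Python - a hypothetical sketch of the analogous change (the function `foo` and its values are invented for illustration):

```python
# Before the refactor the function was: def foo(x, y, alpha=0)
# After the refactor, a new required parameter z has been inserted:
def foo(x, y, z, alpha=0):
    return (x, y, z, alpha)

# An old call site that passed alpha positionally still runs without
# complaint, but the value meant for alpha now silently binds to z:
result = foo(1, 2, 5)   # the 5 was intended as alpha, but lands in z
print(result)           # (1, 2, 5, 0) - alpha got its default, z got 5
```

No compiler, and no grep for `foo(`, distinguishes this stale call from a correct one; only a test that knows the intended result will.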
In python I'd have to trust grep (and follow up all the cases where the
function was assigned a new name, passed to a callback interface etc.
and/or where the arguments are built somewhere else and just passed in
via "apply").
In reality, in C++ _and_ Python you'd trust your test cases. :)
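A minimal sketch of what "trusting your test cases" looks like here - `foo` and its arithmetic are invented for illustration, assuming the same refactor as above:

```python
# Hypothetical: foo was refactored from foo(x, y, alpha=0)
# to foo(x, y, z, alpha=0), and a test pins down a caller's expectation.
def foo(x, y, z, alpha=0):
    return x + y + z + alpha

def test_old_call_site():
    # A stale call site still using the old (x, y, alpha=...) form now
    # raises TypeError because z is missing - the test flags the break
    # regardless of what the compiler (or lack of one) had to say.
    try:
        foo(1, 2, alpha=3)
    except TypeError:
        return  # refactoring break detected
    raise AssertionError("stale call site slipped through")

test_old_call_site()
```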
As far as I can tell, the only way to have the same confidence in one's
code as you'd get with C++ (albeit using code that might take you a lot
longer to write), is to have unit tests that exercise absolute 100% code
coverage. Which is dauntingly difficult to create even in a more static
language like C++.
The law of diminishing returns applies - even if you can't get 100% code
coverage you can get the lion's share of the bugs by focusing on the more
complicated parts of the code, or on parts that have been modified the
most or most recently. Having said that though, it's tough to sleep well
at night when there are lots of holes in your testing, regardless of the
language used. :)
However there are large programs out there written in python, so it
obviously can be done.
This discussion comes up a lot, and one aspect of it that we consistently
overlook is that successful large programs have different, lower levels of
coupling than small ones. The "can't use Python for large
projects" logic usually goes something like this:
"In a small program, two different functions can call function X, so
without compile-time arg and type checking I have to check those two
functions if I change X. Large programs have lots of functions, so I would
have to check and possibly change a million different places. Therefore,
it's not feasible to use Python on a large project because it would take
too much maintenance and the risk of breaking things is too high."
In truth, though, in a large project every function _won't_ be allowed to
call any other function, and it's usually through pretty well-defined and
limited interfaces that inter-module "communication" takes place, so
the vast majority of bug fixes, refactorings, and other code changes are
quite local - and if they aren't, your system is suffering from poor design.
In the cases where refactoring of those more public interfaces is
necessary, then, yes, you do have to do a lot of work to make sure
nothing gets broken, but it's certainly no more work than a similar
change in a large C++ program (and that's why a lot more thought goes into
designing those interfaces in the first place).
-Dave