Discussion:
Python syntax in Lisp and Scheme
Kenny Tilton
2003-10-03 13:52:07 UTC
It'd be interesting to know where people got the idea of learning
Scheme/Lisp from (apart from compulsory university courses)?
http://alu.cliki.net/The%20Road%20to%20Lisp%20Survey
That recently got repotted from another cliki and it's a little mangled,
but until after ILC2003 I am a little too swamped to clean it up.
Me and my big mouth. Now that I have advertised the survey far and wide,
and revisited it and seen up close the storm damage, sh*t, there goes
the morning. :) Well, I needed a break from RoboCells:

http://sourceforge.net/projects/robocells/

I am going to do what I can to fix up at least the road categorization,
and a quick glance revealed some great new entries, two that belong in
my Top Ten (with apologies to those getting bumped).

kenny
Bengt Richter
2003-10-08 22:12:31 UTC
You know, I think that this thread has so far set a comp.lang.* record
for civility in the face of a massively cross-posted language
comparison thread. I was even wondering if it was going to die a quiet
death, too.
Ah well, we all knew it was too good to last. Have at it, lads!
Common Lisp is an ugly language that is impossible to understand, with
crufty semantics.
Scheme is only used by ivory-tower academics and is irrelevant to
real-world programming.
Python is a religion that worships at the feet of Guido van Rossum,
combining the syntactic flaws of Lisp with a bad case of feeping
creaturisms taken from languages more civilized than itself.
There. Is everyone pissed off now?
No, that seems about right.
LOL ;-)

Regards,
Bengt Richter
Thomas F. Burdick
2003-10-08 06:24:35 UTC
In article <xcvpth8rcfh.fsf at famine.ocf.berkeley.edu>,
I find the Lisp syntax hardly readable when everything looks alike,
mostly words and parentheses, and when every level of nesting requires
parens. I understand that it's easier for macros to work with, but it's
harder to work with for humans like me.
You find delimited words more difficult than symbols? For literate
people who use alphabet-based languages, I find this highly suspect.
Maybe readers of only ideogram languages might have different
preferences, but we are writing in English here...
Well, there are a few occasions where symbols are preferable. Just
imagine mathematics with words only.
Oh, certainly. Unlike most languages, Lisp lets you use symbols for
your own names (which is easily abused, but not very often). A bad
example:

;; Lets you swear in your source code, cartoonishly
(define-symbol-macro $%^&!
  (error "Aw, $%^&! Something went wrong..."))

;; An example use
(defun foo (...)
  (cond
    ...
    (t $%^&!)))

And, although you generally use symbols from the KEYWORD package for
keyword arguments, you don't have to, and they don't have to be words:

(defgeneric convert-object (object new-type)
  (:documentation "Like an extensible COERCE."))

(defun convert (object &key ((-> to)))
  "Sugary"
  (convert-object object to))

(defconstant -> '-> "More sugar")

;; Example usage
(convert *thing* -> (class-of *other-thing*))

Of course, these are lame examples, but they show that Lisp *can*
incorporate little ascii-picture-symbols. Good examples would
necessarily be very domain-dependent.
--
/|_ .-----------------------.
,' .\ / | No to Imperialist war |
,--' _,' | Wage class war! |
/ / `-----------------------'
( -. |
| ) |
(`-. '--.)
`. )----'
Alex Martelli
2003-10-12 17:06:52 UTC
[quantum programming]
While an interesting topic, it's something I'm not going to worry about.
Me neither, for now.
And if I did, it would be in Python ;)
I suspect no existing language would be anywhere near adequate.
But if any current programming concept could stretch there, it might
be that of "unrealized until looked-into set of things", as in, Haskell's
"lazy" (nonstrict) lists. Now lists are sequential and thus quantumly
inappropriate, but perhaps it's a start.
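A sketch of that "unrealized until looked-into" idea in Python, using a generator as a stand-in for Haskell's nonstrict lists (the names here are purely illustrative):

```python
import itertools

def naturals():
    """Conceptually infinite sequence; elements exist only when demanded."""
    n = 0
    while True:
        yield n
        n += 1

# Nothing past the fifth element is ever realized:
first_five = list(itertools.islice(naturals(), 5))  # [0, 1, 2, 3, 4]
```

Like a lazy list, the generator commits to no elements until a consumer looks into it.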
I bring it up as a counter-example to the idea that all modes of
programming have been and can be explored in a current Lisp.
I conjectured one interesting possibility -- that of handling ensembles
of possible solutions to a given problem.
I suspect we may have to map the 'ensembles' down to sets of
items, just as we generally map concurrency down to sets of
sequential actions, in order to be able to reason about them (though
I have no proof of that conjecture). IF you have to map more
complicated intrinsics down to sequential, deterministic, univocal
"things", I'm sure you could do worse than Lisp. As to whether
that makes more sense than dreaming up completely different
languages having (e.g.) nondeterminism or multivocity as more
intrinsic concepts, I pass: it depends mostly on what human beings
will find they need to use in order to reason most effectively in
this new realm -- and quite likely different humans will find they
have different needs in this matter.
In retrospect I should have given a more obvious possibility.
At some point I hope to have computer systems I can program
by voice in English, as in "House? Could you wake me up
at 7?" That is definitely a type of programming, but Lisp is
Yeah, well, I fear the answer will be yes (it could), but it won't
do so since you haven't _asked_ it to wake you up, only if it
could. Me, I definitely don't want to use natural language with
all of its ambiguity for anything except communicating with
other human beings, thankyouverymuch.
a language designed for text, not speed.
*blink* what does THAT doubtful assertion have to do with anything
else we were discussing just now...? I think lisp was designed for
lists (as opposed to, say, snobol, which WAS "designed for text") and
that they're a general enough data structure (and supplemented in
today's lisps with other good data structures) that they'll be quite good
for all kinds of 'normal' (deterministic &c) programming. As for speed,
I'm sure it's easier to get it out of lisp than out of python right now.
So what's your point, and its relation to the above...?
I believe it is an accepted fact that uniformity in GUI design is a good
thing because users don't need to learn arbitrarily different ways of
using different programs. You only need different ways of interaction
when a program actually requires it for its specific domain.
Yes, I agree this IS generally accepted (with, of course, some dissenters,
but in a minority).
My spreadsheet program looks different from my word processor
Sure, so do mine, but their menus are quite similar -- in as much as
it makes sense for them to have similar operations -- and ditto ditto
for their toolbars, keyboard shortcuts, etc etc. I.e. the differences
only come "when needed for a specific domain" just as Pascal just
said. So I don't know what you're intending with this answer.
is more in common. Still, the phrase "practicality beats purity"
seems appropriate here.
Uniformity is more practical than diversity: e.g. ctrl-c as the Copy
operation everywhere means my fingers, even more than my brain, get
used to it. If you assign ctrl-c to some totally different operation in
your gui app "because you think it's more practical" you're gonna
drive me crazy, assuming I have to use your app. (That already
happens to me with the -- gnome based i think -- programs using
ctrl-z for minimize instead of undo -- I'm starting to have frayed
nerves about that even for GVIM, one of the programs I use most
often...:-).
I firmly believe people can in general easily handle much more
complicated syntax than Lisp has. There's plenty of room to
spare in people's heads for this subject.
Sure, but is it worth it?
Do you have any doubt to my answer? :)
Given the difficulty I'm having understanding your stance[s] in
this post, I do. My own answer would be that syntax sugar is
in people's head anyway because of different contexts -- infix
arithmetic taught since primary school, indentation in outlines
and pseudocode, etc etc -- so, following that already-ingrained
set of conventions takes no "room to spare in people's heads" --
indeed, the contrary -- it saves them effort. If people's head
started as "tabula rasa" it might be different, but they don't, so
that's a moot issue.

That much being said, I _do_ like prefix syntax. In some cases
I need to sum a+b+c+d and repeating that silly plus rather than
writing (+ a b c d) grates. Or I need to check a<b<c<d and
again I wish I could more summarily write (< a b c d). When I
designed my own language for bridge-hands evaluation, BBL, I
used prefix notation, though in the form operator ( operands )
[which I thought would have been easier for other bridge players
to use], e.g.:

& ( # weak NT opener requires AND of two things:
    s ( 1 g 4 3 3 3   # shape 4333 (any), or
        2 g 4 4 3 2   # 4432 (any), or
        3 3- 3- 3- 5  # 5332 with 5 clubs, or
        4 3- 3- 5 3-  # 5332 with 5 diamonds
      )
    < ( 12            # as well as, 13-15 range for
        \+ SHDC c( 4 3 2 1 0) # normal Milton-Work pointcount
        16
      )
  )

Maybe readers are starting to understand why I don't WANT to
use a language I design myself;-). Anyway, the language was
NOT enthusiastically taken up, until I wrote code generators with
a GUI accepting conditions in more "ordinary looking" notations
and building this, ahem, intrinsically "better" one;-) -- then, but only
then, did other players start using this to customize hand generators
and the like. (Yes, I did have macros, but puny enough that they
still required operator(operands) syntax -- they basically served only
to reduce duplication, or provide some little abstraction, not to
drastically change the language syntax at all). Ah well -- maybe I
should just put the BBL (Turbo Pascal) implementation and (Italian-
language) reference manual online -- it still moves nostalgia in me!-)
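Incidentally, the two variadic prefix forms wished for above, (+ a b c d) and (< a b c d), have close Python counterparts; `chained_less` below is a made-up helper, and Python also accepts the chained literal `a < b < c < d` directly:

```python
def chained_less(*args):
    """Variadic analogue of Lisp's (< a b c d): strictly increasing?"""
    return all(x < y for x, y in zip(args, args[1:]))

total = sum((1, 2, 3, 4))           # plays the role of (+ a b c d)
ordered = chained_less(1, 2, 3, 4)  # plays the role of (< a b c d)
```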
Convenience is what matters. If you are able to conveniently express
solutions for hard problems, then you win. In the long run, it doesn't
My APL experience tells me this is false: conveniently expressing
solutions is HALF the problem -- you (and others!) have to be
able to read them back and maintain and alter them later too.
matter much how things behave in the background, only at first.
Personally, I would love to write equations on a screen like I
would on paper, with integral signs, radicals, powers, etc. and
not have to change my notation to meet the limitations of computer
input systems.
So jot your equations on a tablet-screen and look for a good
enriched text recognition system. What's programming gotta
do with it?
For Lisp is a language tuned to keyboard input and not the full
range of human expression. (As with speech.)
Python even more so on the output side -- try getting a screen-reader to
do a halfway decent job with it. But what does this matter here?
(I know, there are people who can write equations in TeX as
fast as they can on paper. But I'm talking about lazy ol' me
who wants the convenience.)
Or, will there ever be a computer/robot combination I can
teach to dance? Will I do so in Lisp?
You may want to teach by showing and having the computer
infer more general rules from example. Whether the inference
engine will be best built in lisp, prolog, ocaml, mozart, whatever,
I dunno. I don't think it will be optimally done in Python, though.
"Horses for courses" is my philosophy in programming.
It seems to me that in Python, just as in most other languages, you
always have to be aware that you are dealing with classes and objects.
Given the "everything is an object" (classes included) and every object
belongs to a class, you could indeed say that -- in much the same sense
as you may be said to always be aware that you're breathing air in
everyday life. Such awareness is typically very subconscious, of course.
Why should one care? Why does the language force me to see that when it
really doesn't contribute to the solution?
I'm not sure in what sense "python forces you to see" that, e.g.,
the number 2 is an object -- or how can that fail to "contribute to
the solution". Care to exemplify?
Hmmm.. Is the number '1' an object? Is a function an object?
What about a module? A list? A class?
Yes to all of the above, in Python. I don't get your point.
print sum(range(100))
4950
Where in that example are you aware that you are dealing with classes
and objects?
Me? At every step -- I know 'sum' names a builtin object that is a
function (belonging to the class of builtin functions) taking one argument
which is a sequence, 'range' names another builtin object returning
a list object, etc. I'm not directly dealing with any of their classes --
I know they belong to classes, like any object does, but I have no need
to think about them in this specific statement (in fact, I hardly ever do;
signature-based polymorphism is what I usually care about, not class
membership, far more often than not).

But I don't get your points -- neither Andrew's nor Pascal's. How does
this differ from the awareness I might have in some macro-enhanced
lisp where I would type (print (sum (range 100))) or the like?
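On the Python side, the "everything is an object" claim is directly checkable, and the point is precisely that none of it intrudes at the call site (a small sketch):

```python
import types

# Numbers, functions, and modules are all objects belonging to classes:
assert isinstance(1, int)                   # the number 1 is an int instance
assert isinstance(sum, type(len))           # builtins share a function class
assert isinstance(types, types.ModuleType)  # even a module is an instance

# ...yet none of that machinery shows when you just use them:
print(sum(range(100)))  # prints 4950
```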
conjecture is that additional syntax can make some things easier.
That a problem can be solved without new syntax does not
contradict my conjecture.
But even if we provisionally concede your conjecture we are still
left wondering: is the degree of easing so high that it overcomes
the inevitable increase in complication, needed for a language to
have N+1 syntax forms where previously it only had N? I.e., it's
in general a difficult engineering tradeoff, like many in language
design -- which is why I'd rather delegate the decisions on these
tradeoffs to individuals, groups and processes with a proven track
record for making a lot of them with overall results that I find
delightful, rather than disperse them to myself & many others
(creating lots of not-quite-congruent language dialects).


Alex
Daniel P. M. Silva
2003-10-13 01:49:46 UTC
Post by Alex Martelli
Yeah, well, I fear the answer will be yes (it could), but it won't
do so since you haven't _asked_ it to wake you up, only if it
could.
Pshaw. My hypothetical house of the 2050s or so will know
that "could" in this context is a command. :)
Good luck the first time you want to ask it about its capabilities,
and my best wishes that you'll remember to use VERY precise
phrasing then.
Hehe, I hope I never scream "NAMESPACE-MAPPED-SYMBOLS" at my house :P

- DS
David Mertz
2003-10-16 04:14:58 UTC
|> Here's a quick rule that is pretty damn close to categorically true for
|> Python programming: If you use more than five levels of indent, you are
|> coding badly. Something is in desperate need of refactoring.

Pascal Bourguignon <spam at thalassa.informatimago.com> wrote previously:
|Here is an histogram of the depths of the top level sexps found in my
|emacs sources:
|((1 . 325) (2 . 329) (3 . 231) (4 . 163) (5 . 138) (6 . 158) (7 . 102)
| (8 . 94) (9 . 63) (10 . 40) (11 . 16) (12 . 20) (13 . 9) (14 . 4)
| (15 . 5) (16 . 4) (17 . 2) (19 . 2) (23 . 1))
|Am I depraved in writing code with depths down to 23?

As I've written lots of times in these threads, I haven't really used
Lisp. In fact, I really only did my first programming in Scheme (for an
article on SSAX) in the last couple weeks; I know Scheme isn't Common
Lisp, no need to point that out again. However, I -have- read a fair
number of snippets of Lisp code over the years, so my familiarity runs
slightly deeper than the couple weeks.

All that said, my gut feeling is that depth 23 is, indeed, ill-designed.
Even the more common occurrences of 12 or 13 levels seems like a lot
more than my brain can reason about. I'm happy to stipulate that
Bourguignon is smarter than I am... but I'm still confident that I can
do this sort of thinking better than 95% of the people who might have to
READ his code.

And the purpose of code, after all, is to COMMUNICATE ideas: firstly to
other people, and only secondly to machines.

|Ok, in Python, you may have also expressions that will further deepen
|the tree, but how can you justify an artificial segregation between
|indentation and sub-expressions?

Because it's Python! There is a fundamental syntactic distinction
between statements and expressions, and statements introduce blocks
(inside suites, bare expressions can occur though--usually functions
called for their side effects). It is the belief of the BDFL--and of
the large majority of programmers who use Python--that a syntactic
distinction between blocks with relative indentation and expressions
that nest using parens and commas AIDS READABILITY.

I can say experientially, and from reading and talking to other
Pythonistas, that my brain does a little flip when it finishes
identifying a suite, then starts identifying the parts of an expression.
And conveniently, in Python, the sort of thinking I need to do when I
read or write the lines of a suite is a bit different than for the parts
of an expression. Not just because I am deceived by the syntax, but
because the language really does arrange for a different sort of thing
to go on in statements versus expressions (obviously, there are SOME
overlaps and equivalences; but there's still a useful pattern to the
distinction).

Still, for a real comparison of depth, I suppose I'd need to look at the
combined depth of indent+paren-nesting. Even for that, well-designed
Python programs top out at 7 or 8, IMO. Maybe something a little deeper
reasonably occurs occasionally, but the histogram would sure look
different from Pascal's ("Flat is better than nested").
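Such a histogram is easy to compute mechanically; here is a rough Python sketch that counts only parenthesis nesting and ignores strings and comments (the function name is made up):

```python
from collections import Counter

def paren_depth_histogram(source):
    """Map each top-level sexp in `source` to its maximal paren depth."""
    hist = Counter()
    depth = max_depth = 0
    for ch in source:
        if ch == '(':
            depth += 1
            max_depth = max(max_depth, depth)
        elif ch == ')':
            depth -= 1
            if depth == 0:        # a top-level form just closed
                hist[max_depth] += 1
                max_depth = 0
    return dict(hist)
```

Run over a file of Lisp source, it yields pairs like Pascal's ((1 . 325) (2 . 329) ...).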

Yours, David...

--
---[ to our friends at TLAs (spread the word) ]--------------------------
Iran nuclear neocon POTUS patriot Pakistan weaponized uranium invasion UN
smallpox Gitmo Castro Tikrit armed revolution Carnivore al-Qaeda sarin
---[ Gnosis Software ("We know stuff") <mertz at gnosis.cx> ]---------------
Pascal Costanza
2003-10-09 13:59:24 UTC
you can use macros to do everything one could use HOFs for (if you
really want).
I should have added: As long as it should execute at compile time, of
course.
Really? What about arbitrary recursion?
I don't see the problem. Maybe you have an example? I am sure the
Lisp'ers here can come up with a macro solution for it.
I'm not terribly familiar with the details of Lisp macros but since
recursion can easily lead to non-termination you certainly need tight
restrictions on recursion among macros in order to ensure termination of
macro substitution, don't you? Or at least some ad-hoc depth limitation.
The Lisp mindset is not to solve problems that you don't have.

If your code has a bug then you need to debug it. Lisp development
environments provide excellent debugging capabilities out of the box.
Don't guess how hard it is when you don't have the experience yet.


Pascal
--
Pascal Costanza University of Bonn
mailto:costanza at web.de Institute of Computer Science III
http://www.pascalcostanza.de R&ouml;merstr. 164, D-53117 Bonn (Germany)
David Eppstein
2003-10-22 03:33:23 UTC
In article <1jclovopokeogrdajo6dfmhm090cdllfki at 4ax.com>,
It's certainly true that mathematicians do not _write_
proofs in formal languages. But all the proofs that I'm
aware of _could_ be formalized quite easily. Are you
aware of any counterexamples to this? Things that
mathematicians accept as correct proofs which are
not clearly formalizable in, say, ZFC?
I am not claiming that it is a counterexample, but I've always had
some difficulty imagining how the usual proof of Euler's
theorem about the number of corners, sides and faces of a polyhedron
(correct terminology, BTW?) could be formalized. Also, however that
could be done, I have an uneasy feeling about how complex it
would be compared to the conceptual simplicity of the proof itself.
Which one do you think is the usual proof?
http://www.ics.uci.edu/~eppstein/junkyard/euler/

Anyway, this exact example was the basis for a whole book about what is
involved in going from informal proof idea to formal proof:
http://www.ics.uci.edu/~eppstein/junkyard/euler/refs.html#Lak
--
David Eppstein http://www.ics.uci.edu/~eppstein/
Univ. of California, Irvine, School of Information & Computer Science
Alex Martelli
2003-10-04 19:48:54 UTC
Bengt Richter wrote:
...
I like the Bunch class, but the name suggests vegetables to me ;-)
Well, I _like_ vegetables...
BTW, care to comment on a couple of close variants of Bunch with
per-object class dicts? ...
def mkNSC(**kwds): return type('NSC', (), kwds)()
Very nice (apart from the yecchy name;-).
or, stretching the one line a bit to use the instance dict,
def mkNSO(**kwds): o=type('NSO', (), {})(); o.__dict__.update(kwds); return o
I don't see the advantage of explicitly using an empty dict and then
updating it with kwds, vs using kwds directly.
I'm wondering how much space is actually wasted with a throwaway class. Is
there a lazy copy-on-write kind of optimization for class and instance
dicts that prevents useless proliferation? I.e.,
I strongly doubt there's any "lazy copy-on-write" anywhere in Python.
The "throwaway class" will be its dict (which, here, you need -- that's
the NS you're wrapping, after all) plus a little bit (several dozen bytes
for the typeobject, I'd imagine); an instance of Bunch, probably a bit
smaller. But if you're going to throw either away soon, who cares?
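For readers joining the thread here, the two recipes under discussion look roughly like this (a sketch reconstructed from the discussion, not the exact posted code):

```python
class Bunch:
    """Attribute-access namespace: state lives in the instance dict."""
    def __init__(self, **kwds):
        self.__dict__.update(kwds)

def mkNSC(**kwds):
    """One throwaway class per call: state lives in the class dict."""
    return type('NSC', (), kwds)()

b = Bunch(x=1, y=2)
n = mkNSC(x=1, y=2)  # same attribute access; only the storage differs
```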
but I think the "purer" (more extreme) versions are
interesting "tipizations" for the languages, anyway.
Oh goody, a new word (for me ;-). Would you define "tipization"?
I thought I was making up a word, and slipped by spelling it
as in Italiano "tipo" rather than English "type". It appears
(from Google) that "typization" IS an existing word (sometimes
mis-spelled as "tipization"), roughly in the meaning I intended
("characterization of types") -- though such a high proportion
of the research papers, institutes, etc, using "typization",
seems to come from Slavic or Baltic countries, that I _am_
left wondering...;-).


Alex
Alan Crowe
2003-10-04 11:31:45 UTC
If a set of macros could be written to improve LISP
syntax, then I think that might be an amazing thing. An
interesting question to me is why hasn't this already been
done.
I think the issue is the grandeur of the Lisp vision. More
ambitious projects require larger code bases. Ambition is
hard to quantify. Nevertheless one must face the issue of
scaling. Does code size go as the cube of ambition, or is it
the fourth power of ambition? Or something else entirely.

Lisp aspires to change the exponent, not the constant
factor. The constant factor is very important. That is why
CL has FILL :-) but shrinking the constant factor has been
done (and with C++ undone).

Macros can be used to abbreviate code. One can spot that one
is typing similar code over and over again. One says
"whoops, I'm typing macro expansions". Do you use macros to
tune the syntax, so that you type N/2 characters instead of
N characters, or do you capture the whole concept in macro
and eliminate the repetition altogether?

The point is that there is nowhere very interesting to go
with syntax tuning. It is the bolder goal of changing the
exponent, and thus seriously enlarging the realm of the
possible, that excites.

Alan Crowe
Ingvar Mattsson
2003-10-09 11:16:09 UTC
method overloading,
Joe> Now I'm *really* confused. I thought method overloading involved
Joe> having a method do something different depending on the type of
Joe> arguments presented to it. CLOS certainly does that.
He probably means "operator overloading" -- in languages where
there is a difference between built-in operators and functions,
their OOP features let them put methods on things like "+".
Lisp doesn't let you do that, because it turns out to be a bad idea.
When you go reading someone's program, what you really want is for
the standard operators to be doing the standard and completely
understood thing.
Though if one *really* wants to have +, -, * and / as generic
functions, I imagine one can use something along the lines of:

(defpackage "GENERIC-ARITHMETIC"
  (:shadow "+" "-" "/" "*")
  (:use "COMMON-LISP"))

(in-package "GENERIC-ARITHMETIC")
(defgeneric arithmetic-identity (op arg))

(defmacro defarithmetic (op)
  (let ((two-arg
          (intern (concatenate 'string "TWO-ARG-" (symbol-name op))
                  "GENERIC-ARITHMETIC"))
        (cl-op (find-symbol (symbol-name op) "COMMON-LISP")))
    `(progn
       (defun ,op (&rest args)
         (cond ((null args) (arithmetic-identity ',op nil))
               ((null (cdr args))
                (,two-arg (arithmetic-identity ',op (car args))
                          (car args)))
               (t (reduce (function ,two-arg)
                          (cdr args)
                          :initial-value (car args)))))
       (defgeneric ,two-arg (arg1 arg2))
       (defmethod ,two-arg ((arg1 number) (arg2 number))
         (,cl-op arg1 arg2)))))

Now, I have (because I am lazy) left out definitions of the generic
function ARITHMETIC-IDENTITY (general idea, when fed an operator and
NIL, it returns the most generic identity, when fed an operator and an
argument, it can return a value that is more suitable) and there's
probably errors in the code, too.

But, in principle, that should be enough of a framework to build from,
I think.
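For contrast, the Python side of this subthread gets the generic-arithmetic effect through special methods, since the built-in operators already dispatch on the operands' classes (a minimal sketch, not anyone's posted code):

```python
class Vec:
    """2-D vector: '+' is extended just by defining __add__ on the class."""
    def __init__(self, x, y):
        self.x, self.y = x, y
    def __add__(self, other):
        return Vec(self.x + other.x, self.y + other.y)

v = Vec(1, 2) + Vec(3, 4)  # Vec(4, 6)
```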

//Ingvar
--
My posts are fair game for anybody who wants to distribute the countless
pearls of wisdom sprinkled in them, as long as I'm attributed.
-- Martin Wisse, in a.f.p
Edi Weitz
2003-10-16 21:08:43 UTC
For simple use of built-in libraries,
http://online.effbot.org/2003_08_01_archive.htm#troll
looks like a good test case.
Quick hack follows.

edi at bird:/tmp > cat troll.lisp
(asdf:oos 'asdf:load-op :aserve)
(asdf:oos 'asdf:load-op :cl-ppcre)

(defparameter *scanner*
(cl-ppcre:create-scanner
"<a href=\"AuthorThreads.asp[^\"]*\">([^<]+)</a></td>\\s*
<td align=\"center\">[^<]+</td>\\s*
<td align=\"center\">[^<]+</td>\\s*
<td align=\"center\">\\d+</td>\\s*
<td align=\"center\">(\\d+)</td>\\s*
<td align=\"center\">(\\d+)</td>\\s*
<td align=\"center\">\\d+</td>\\s*
<td align=\"center\">(\\d+)</td>\\s*"))

(defun troll-checker (name)
  (let ((target
         (net.aserve.client:do-http-request
          (format nil "http://netscan.research.microsoft.com/Static/author/authorprofile.asp?searchfor=~A" name)
          :protocol :http/1.0)))
    (cl-ppcre:do-scans (match-start match-end reg-starts reg-ends *scanner* target)
      (flet ((nth-group (n)
               (subseq target (aref reg-starts n) (aref reg-ends n))))
        (let* ((group (nth-group 0))
               (posts (parse-integer (nth-group 1)))
               (replies (parse-integer (nth-group 2)))
               (threads-touched (parse-integer (nth-group 3)))
               (reply-to-post-ratio (/ replies posts))
               (threads-to-post-ratio (/ threads-touched posts)))
          (unless (< posts 10)
            (format t "~55A R~,2F T~,2F ~:[~;TROLL~:[?~;!~]~]~%"
                    (subseq group 0 (min 55 (length group)))
                    reply-to-post-ratio
                    threads-to-post-ratio
                    (and (> reply-to-post-ratio .8)
                         (< threads-to-post-ratio .4))
                    (< threads-to-post-ratio .2))))))))

(compile 'troll-checker)

edi at bird:/tmp > cmucl
; Loading #p"/home/edi/.cmucl-init".
CMU Common Lisp 18e, running on bird.agharta.de
With core: /usr/local/lib/cmucl/lib/lisp.core
Dumped on: Thu, 2003-04-03 15:47:12+02:00 on orion
Send questions and bug reports to your local CMUCL maintainer,
or see <http://www.cons.org/cmucl/support.html>.
Loaded subsystems:
Python 1.1, target Intel x86
CLOS 18e (based on PCL September 16 92 PCL (f))
* (load "troll")

; loading system definition from /usr/local/lisp/Registry/aserve.asd into
; #<The ASDF1017 package, 0/9 internal, 0/9 external>
; registering #<SYSTEM ASERVE {4854AEF5}> as ASERVE
; loading system definition from /usr/local/lisp/Registry/acl-compat.asd into
; #<The ASDF1059 package, 0/9 internal, 0/9 external>
; registering #<SYSTEM ACL-COMPAT {4869AD35}> as ACL-COMPAT
; loading system definition from /usr/local/lisp/Registry/htmlgen.asd into
; #<The ASDF1145 package, 0/9 internal, 0/9 external>
; registering #<SYSTEM HTMLGEN {487E64C5}> as HTMLGEN
; loading system definition from /usr/local/lisp/Registry/cl-ppcre.asd into
; #<The ASDF1813 package, 0/9 internal, 0/9 external>
; registering #<SYSTEM #:CL-PPCRE {48F32835}> as CL-PPCRE
; Compiling LAMBDA (NAME):
; Compiling Top-Level Form:
T
* (troll-checker "edi at agharta.de")
comp.lang.lisp R0.93 T0.63
NIL
* (troll-checker "eppstein at ics.uci.edu")
rec.photo.digital R1.00 T0.76
rec.arts.sf.written R0.99 T0.57
comp.lang.python R0.98 T0.64
rec.photo.equipment.35mm R1.00 T0.73
sci.math R1.00 T0.77
rec.puzzles R1.00 T0.75
comp.theory R1.00 T0.56
comp.graphics.algorithms R1.00 T0.87
comp.sys.mac.apps R1.00 T0.69
NIL
* (troll-checker "spam at thalassa.informatimago.com")
comp.lang.lisp R0.91 T0.44
fr.comp.os.unix R1.00 T0.70
es.comp.os.linux.programacion R1.00 T0.67
fr.comp.lang.lisp R1.00 T0.40 TROLL?
comp.unix.programmer R1.00 T0.92
sci.space.moderated R1.00 T0.43
gnu.emacs.help R0.95 T0.84
sci.space.policy R1.00 T0.33 TROLL?
alt.folklore.computers R1.00 T0.43
comp.lang.scheme R0.83 T0.58
fr.comp.os.mac-os.x R0.92 T0.83
NIL

Granted, Portable AllegroServe[1] and CL-PPCRE[2] aren't "built-in"
(but freely available, compatible with various CL compilers, and easy
to install) and Python might have a bit more syntactic sugar but it
wasn't _too_ hard to do that in Lisp.

Edi

[1] <http://portableaserve.sf.net/>
[2] <http://weitz.de/cl-ppcre/>
Dave Benjamin
2003-10-09 03:01:02 UTC
For instance, I always thought this was a cooler alternative to the
try/finally block to ensure that a file gets closed (I'll try not to
open('input.txt', { |f|
    do_something_with(f)
    do_something_else_with(f)
})
f = open('input.txt')
do_something_with(f)
do_something_else_with(f)
f.close()
"Explicit is better than implicit"
In that case, why do we eschew code blocks, yet have no problem with the
implicit invocation of an iterator, as in:

for line in file('input.txt'):
do_something_with(line)

This is not to say that I dislike that behavior; in fact, I find it
*beneficial* that the manner of looping is *implicit* because you can
substitute a generator for a sequence without changing the usage. But
there's little readability difference, IMHO, between that and:

file('input.txt').each_line({ |line|
    do_something_with(line)
})

Plus, the first example is only obvious because I called my iteration
variable "line", and because this behavior is already widely known. What
if I wrote:

for byte in file('input.dat'):
do_something_with(byte)

That would be a bit misleading, no? But the mistake isn't obvious. OTOH,
in the more explicit (in this case) Ruby language, it would look silly:

open('input.txt').each_line { |byte|
# huh? why a byte? we said each_line!
}

I think this is important to point out, because the implicit/explicit
rule comes up all the time, yet Python is implicit about lots of things!
To name a few:

- for loops and iterators
- types of variables
- dispatching via subclass polymorphism
- coercion (int->float, int->long...)
- exceptions (in contrast with Java's checked exceptions)
- __magic_methods__
- metaclasses
- nested scopes (compared to yesteryear's lambda x, y=y, z=z: ...)
- list comprehensions

In all of the above cases (with a bit of hesitation toward the voodoo of
metaclasses) I think Python is a better language for it. On the other
hand, Perl's implicit $_ variable is a good example of the hazards of
implicitness; that can be downright confusing. So, it's not cut and dry
by any means.

If all you're saying is that naming something is better than not naming
something because explicit is better than implicit, I'd have to ask why:

a = 5
b = 6
c = 7
d = a + b
e = c / 2
result = d + e
return result

Is any better than:

...
return (a + b) + (c / 2)

To me, it's the same issue. Why should I have to name something that I'm
just going to return in the next statement, or pass as a parameter, and
then be done with it? Does that really increase either readability or
understandability? Why should I name something that I'm not going to ask
for later?
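The block-based open above can be emulated in Python with a plain higher-order function; `with_open_file` is a made-up name, not a stdlib facility:

```python
def with_open_file(path, fn):
    """Open `path`, pass the file to `fn`, and always close it."""
    f = open(path)
    try:
        return fn(f)
    finally:
        f.close()

# Usage, block-style, with the "block" passed as a function:
# with_open_file('input.txt', lambda f: do_something_with(f))
```

The try/finally lives in one place, and callers get the terse call-site the Ruby block gives, at the cost of wrapping the body in a function.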
Even your example clearly shows that the try block is much more readable
and understandable.
That's why it's considered evil by the majority of Python developers.
Readability is a moving target. I think that the code block syntax
strikes a nice balance between readability and expressiveness. As far as
what the majority of Python developers consider evil, I don't think
we've got the stats back on that one.
But the anonymous version still looks more concise to me.
Python prioritizes things differently than other languages.
It's not an APL. "Readability counts."
This is nothing like APL... if anything, it's like Smalltalk, a language
designed to be readable by children! I realize that APL sacrificed
readability for expressiveness to an uncomfortable extreme, but I really
think you're comparing apples and oranges here. List comprehensions are
closer to APL than code blocks.

Dave
Raymond Wiker
2003-10-06 11:09:00 UTC
Permalink
1.) Inventing new control structures (implement lazy data structures,
implement declarative control structures, etc.)
=> This one is rarely needed in everyday application programming and
can easily be misused.
This is, IMHO, wrong. One particular example is creating
macros (or read macros) for giving values to application-specific data
structures.
You have to know if you want a sharp knife (which may hurt you when
misused) or a less sharper one (where it takes more effort to cut
with).
It is easier to hurt yourself with a blunt knife than a sharp
one.
--
Raymond Wiker Mail: Raymond.Wiker at fast.no
Senior Software Engineer Web: http://www.fast.no/
Fast Search & Transfer ASA Phone: +47 23 01 11 60
P.O. Box 1677 Vika Fax: +47 35 54 87 99
NO-0120 Oslo, NORWAY Mob: +47 48 01 11 60

Try FAST Search: http://alltheweb.com/
Corey Coughlin
2003-10-11 00:00:48 UTC
Permalink
You are mostly correct about Japanese, I took a year of it in college
and it is a fairly standard SOV language. (Like Latin, oddly enough.)
And I'm sure you're right about RPN vs. PN, I always get those
confused. Which is kind of the point, really, having studied math
since I was a kid I got used to stuff like "y = mx + b", can you
blame me if I have an easier time with "y = m*x + b" as opposed to
"(let y (+ (* m x) b))" (Forgive me if the parenthesis on that are
off, the newsreader editor doesn't match them, and maybe I need a
'setq' instead of a 'let' or some other thing, I'm not entirely sure.)
(And again, is the point getting more clear?) And thanks for backing
me up on car's and cdr's, I never could understand why a language
ostensibly designed for 'list processing' has such a bizarre way to
reference items in a list. But is (nth 10 mylist) really as easy as
mylist[10]? My intuition says no, not really.

Sure, I can appreciate looking at things in different ways, and it is
nice to learn new things and see how they apply. But if John Grisham
learns Japanese, does that mean he should write all his books in
Japanese? Or should he stick to English? I suppose if I were a real
CS guy (I'm actually an electrical engineer, the scheme course was one
of the two CS courses I took in college, so I'm mostly self taught) or
if I worked within a big group of Lisp programmers, I would probably
feel more comfortable with it. Since I now mostly work as an isolated
programmer for other engineers, and the last language I was using for
everything was C, Python is a huge improvement, and it doesn't give me
too much of a headache. Sure, it's not perfect. But there's no way
I'm going to adopt Lisp as a perfect language anytime soon. That is,
if I want to keep hitting my deadlines and getting paid. And sure, I
may get comfortable and miss out on cool stuff, but on the upside,
I'll be comfortable.

Oh, and if I'm writing in this thread, I suppose I should comment on
how bad lisp macros are. Except I know nothing about them. But it
seems like most languages have dark corners like that, where you can
do thing above and beyond your standard programming practices. Python
has metaclasses, which give me a headache most of the time, so I don't
really use them at all. But I seem to get plenty of stuff done
without using them, so it works for me. If you really have to use
macros in Lisp to get things done, that sounds kind of troublesome,
but it would be consistent; it always seemed like really working well
in Lisp requires you to really know how everything works all at once,
which always kind of struck me as kind of a downside. But as I said,
I'm not the big CS guru, so Lisp just may not be for me in general.
Ah well, I suppose I'll get by with Python. :D
(Not to mention car, cdr, cadr, and
so on vs. index notation, sheesh.)
Yes, that is a real regret. It would have been useful to support
a kind of (nth 10 mylist) straight from the Scheme standard library.
Using parentheses and rpn everywhere makes lisp very easy
to parse, but I'd rather have something easy for me to understand.
That's why I prefer Python: you
get a nice algebraic syntax with infix and equal signs, and it's easy to
understand.
Python is
intuitive to me out of the box, and it just keeps getting better, so I
think I'll stick with it.
First, a minor correction: Lisp/Scheme is like (* 1 2) and that is
Polish Notation or prefix; Reverse Polish Notation or postfix would be
like (1 2 *).
From what I heard about the Japanese language I have formed the
possibly oversimplified impression that it is largely postfix.
Whereas in English we say "I beat you", they may say something like "I
you beat". So I suppose all of the existing programming notations -
Lisp's and Cobol's (* 1 2) and MULTIPLY 1 BY 2, Fortran's "intuitive"
1+2, and OO's one.add(two) - are very counterintuitive to them, and
they would really like the way of HP calculators, no?
And I suppose the ancient Romans (and even the modern Vaticans) would
laugh at this entire dilemma (or trilemma?) between ___fixes.
Intuition is acquired. It is purely a product of education or
brainwashing. There is nothing natural about it. And since it is
acquired, you may as well keep acquiring new intuitions and see more
horizons, rather than keep reinforcing old intuitions and stagnate.
Appreciating a foreign language such as Japanese some day is not a bad
idea.
Raffael Cavallaro
2003-10-12 19:54:59 UTC
Permalink
Lispniks are driven by the assumption that there is always the
unexpected. No matter what happens, it's a safe bet that you can make
Lisp behave the way you want it to behave, even in the unlikely event
that something happens that no language designer has ever thought of
before. And even if you cannot find a perfect solution in some cases,
you will at least be able to find a good approximation for hard
problems.
This I believe is the very crux of the matter. The problem domain to
which lisp has historically been applied, artificial intelligence,
more or less guaranteed that lisp hackers would run up against the
sorts of problems that no one had ever seen before. The language
therefore evolved into a "programmable programming language," to quote
John Foderaro (or whoever first said or wrote this now famous line).

Lisp gives the programmer who knows he will be working in a domain
that is not completely cut and dried, the assurance that his language
will not prevent him from doing something that has never been done
before. Python gives me the distinct impression that I might very well
run up against the limitations of the language when dealing with very
complex problems.

For 90% of tasks, even large projects, Python will certainly have
enough in its ever expanding bag of tricks to provide a clean,
maintainable solution. But that other 10% keeps lisp hackers from
using Python for exploratory programming - seeking solutions in
problem domains that have not been solved before.
Andrew Dalke
2003-10-09 18:45:47 UTC
Permalink
i realize that this thread is hopelessly amorphous, but this post did
introduce some concrete issues which bear concrete responses...
Thank you for the commentary.
i got only as far as the realization that, in order to be of any use,
unicode
data management has to support the eventual primitive string operations.
which
introduces the problem that, in many cases, these primitive operations
eventually devolve to the respective os api. which, if one compares apple
and
unix apis are anything but uniform. it is simply not possible to provide
them
with the same data and do anything worthwhile. if it is possible to give
some
concrete pointers to how other languages provide for this i would be
grateful.

Python does it by ignoring the respective os APIs, if I understand
your meaning and Python's implementation correctly. Here's some
more information about Unicode in Python

http://www.python.org/peps/pep-0100.html
http://www.python.org/peps/pep-0261.html
http://www.python.org/peps/pep-0277.html

http://www.python.org/doc/current/ref/strings.html

http://www.python.org/doc/current/lib/module-unicodedata.html
http://www.python.org/doc/current/lib/module-codecs.html
and i have no idea what people do with surrogate pairs.
See PEP 261 listed above for commentary, and you may want
to email the author of that PEP, Paul Prescod. I am definitely
not the one to ask.
yes, there are several available common-lisp implementations for http
clients
and servers. they offer significant trade-offs in api complexity,
functionality, resource requirements and performance.
And there are several available Python implementations for the same;
Twisted's being the most notable. But the two main distributions (and
variants like Stackless) include a common API for it, which makes
it easy to start, and for most cases is sufficient.

I fully understand that it isn't part of the standard, but it would be
useful if there was a consensus that "packages X, Y, and Z will
always be included in our distributions."
if one needs to _port_ it to a new lisp, yes. perhaps you skipped over the
list of lisps to which it has been ported. if you look at the #+/-
conditionalization, you may observe that the differences are not
significant.

You are correct, and I did skip that list.

Andrew
dalke at dalkescientific.com
Edi Weitz
2003-10-10 09:13:03 UTC
Permalink
Conjecture: Is it that the commercial versions of Lisp pull away
some of the people who would otherwise help raise the default
functionality of the free version? I don't think so... but then
why?
I'm pretty sure this is the case. If you lurk in c.l.l for a while
you'll see that a large part of the regular contributors aren't what
you'd call Open Source zealots. Maybe that's for historical reasons, I
don't know. But of course that's different from languages which have
always been "free" like Perl or Python.

To give one example: One of the oldest dynamic HTTP servers out there
is CL-HTTP.[1] I think it is very impressive but it has a somewhat
dubious license which doesn't allow for commercial deployment - it's
not "free." Maybe, I dunno, it would have a much higher market share
if it had been licensed like Apache when it was released. I'm sure
there's much more high-quality Lisp software out there that hasn't
even been released.

Edi.

[1] <http://www.ai.mit.edu/projects/iiip/doc/cl-http/home-page.html>

Don't be fooled by the "Last updated" line. There's still active
development - see

<ftp://ftp.ai.mit.edu/pub/users/jcma/cl-http/devo>.
MetalOne
2003-10-15 18:05:16 UTC
Permalink
Raffael Cavallaro

I don't know why but I feel like trying to summarize.

I initially thought your position was that lambdas should never be
used. I believe that Brian McNamara and Ken Shan presented powerful
arguments in support of lambda. Your position now appears to have
changed to state that lambdas are ok to use, but their use should be
restricted. One point would appear to desire avoiding duplicate
lambdas. This makes sense. Duplication of this sort is often found
in "if statement" conditional tests also. The next point would be to
name the function if a good name can be found. I believe that
sometimes the code is clearer than a name. Mathematical notation was
invented because natural language is imprecise. Sometimes a name is
better than the code. The name gives a good idea of the "how" and
perhaps you can defer looking at the "how". Sometimes I think using
code in combination with a comment is better. A comment can say a
little more than a name, and the code gives the precision. So as
Marcin said, it is a balancing act to create readable code.

I would like to say that I have found this entire thread very
comforting. I have been programming for 18 years now. For the most
part, when I read other peoples code I see nothing but 300+ line
functions. I have come to feel like most programmers have no idea
what they are doing. But when you're writing small functions and
everybody else is writing 300+ line functions you begin to wonder if
it is you that is doing something wrong. It is nice to see that other
people actually do think about how to write and structure good code.
Erann Gat
2003-10-06 19:19:54 UTC
Permalink
In article <eppstein-9700A3.10461306102003 at news.service.uci.edu>, David
In article
<my-first-name.my-last-name-0610030955090001 at k-137-79-50-101.jpl.nasa.go
v>,
: (with-collector collect
: (do-file-lines (l some-file-name)
: (if (some-property l) (collect l))))
: This returns a list of all the lines in a file that have some property.
OK, that's _definitely_ just a filter: filter someproperty somefilename
Perhaps throw in a fold if you are trying to abstract "collect".
The net effect is a filter, but again, you need to stop thinking about the
"what" and start thinking about the "how", otherwise, as I said, there's
no reason to use anything other than machine language.
Answer 1: literal translation into Python. The closest analogue of
with-collector etc would be Python's simple generators (yield keyword)
yield l
You left out the with-collector part.

But it's true that my examples are less convincing given the existence of
yield (which I had forgotten about). But the point is that in pre-yield
Python you were stuck until the langauge designers got around to adding
it.

I'll try to come up with a more convincing short example if I find some
free time today.

E.
Daniel P. M. Silva
2003-10-08 16:50:39 UTC
Permalink
<posted & mailed>
...
You still can't add new binding constructs or safe parameterizations like
with_directory("/tmp", do_something())
Where do_something() would be evaluated with the current directory set to
"/tmp" and the old pwd would be restored afterward (even in the event of
an exception).
with_directory("/tmp", do_something)
*deferring* the call to do_something to within the with_directory
function. Python uses strict evaluation order, so if and when you
choose to explicitly CALL do_something() it gets called,
pwd = os.getcwd()
try: return thefunc(*args, **kwds)
finally: os.chdir(pwd)
this is of course a widespread idiom in Python, e.g. see
unittest.TestCase.assertRaises for example.
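Written out in full, the idiom Alex sketches might look like this; the surrounding def line and the chdir into the target directory are reconstructed here, since the quoted fragment lost them:

```python
import os

def with_directory(path, thefunc, *args, **kwds):
    # Save the current directory, run thefunc with `path` as the
    # working directory, and restore the old directory afterward,
    # even if thefunc raises an exception.
    pwd = os.getcwd()
    os.chdir(path)
    try:
        return thefunc(*args, **kwds)
    finally:
        os.chdir(pwd)
```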
The only annoyance here is that there is no good 'literal' form for
a code block (Python's lambda is too puny to count as such), so you
do have to *name* the 'thefunc' argument (with a 'def' statement --
Python firmly separates statements from expressions).
That was my point. You have to pass a callable object to with_directory,
plus you have to save in that object any variables you might want to use,
when you'd rather say:

x = 7
with_directory("/tmp",
print "well, now I'm in ", os.getcwd()
print "x: ", x
x = 3
)
Last year -- I think at LL2 -- someone showed how they added some sort of
'using "filename":' form to Python... by hacking the interpreter.
A "using" statement (which would take a specialized object, surely not
a string, and call the object's entry/normal-exit/abnormal-exit methods)
might often be a good alternative to try/finally (which makes no provision
for 'entry', i.e. setting up, and draws no distinction between normal
and abnormal 'exits' -- often one doesn't care, but sometimes yes). On
this, I've seen some consensus on python-dev; but not (yet?) enough on
the details. Consensus is culturally important, even though in the end
Guido decides: we are keen to ensure we all keep using the same language,
rather than ever fragmenting it into incompatible dialects.
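As a rough illustration of what such a protocol object might look like, here is a toy sketch with entry/normal-exit/abnormal-exit methods, plus a plain function showing what a "using" statement might expand to. All method names here are invented, not taken from any python-dev proposal:

```python
# A toy object exposing the entry/normal-exit/abnormal-exit hooks
# described above; the method names are invented for illustration.
class TrackedResource:
    def __init__(self):
        self.log = []

    def on_entry(self):
        self.log.append("entry")

    def on_normal_exit(self):
        self.log.append("normal-exit")

    def on_abnormal_exit(self, exc):
        self.log.append("abnormal-exit")

def using(resource, body):
    # What a "using" statement might desugar to, as a plain function.
    resource.on_entry()
    try:
        result = body()
    except Exception as exc:
        resource.on_abnormal_exit(exc)
        raise
    resource.on_normal_exit()
    return result

ok = TrackedResource()
using(ok, lambda: None)

bad = TrackedResource()
try:
    using(bad, lambda: 1 // 0)
except ZeroDivisionError:
    pass
```

The two runs exercise both exits: a normal return triggers on_normal_exit, while the ZeroDivisionError triggers on_abnormal_exit before the exception propagates.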
The point is that the language spec itself is changed (along with the
interpreter in C!) to add that statement. I would be happier if I could
write syntax extensions myself, in Python, and if those extensions worked
on CPython, Jython, Python.Net, Spy, etc.
Some people use Python's hooks to create little languages inside Python
(eg. to change the meaning of instantiation), which are not free of
[invariably spelt as 'self', not 'this', but that's another issue]
this.rest = args
this.keys = kwargs
count[0] = count[0] + 1
return count[0]
obj.object_id = id
return obj
def obj_id(obj): return obj.object_id
tag_obj(object.__new__(type), new_obj_id())))
...
# forgot to check for this case...
print Object(foo="bar")
It's not an issue of "checking": you have written (in very obscure
and unreadable fashion) a callable which you want to accept (and
ignore) keyword arguments, but have coded it in such a way that it
in fact refuses keyword arguments. Just add the **kwds after the
you might forget to specify arguments which you do want your callable
to accept and ignore in a wide variety of other contexts, too.
I think changing the meaning of __new__ is a pretty big language
modification...

- Daniel
Jock Cooper
2003-10-07 01:02:16 UTC
Permalink
Post by Erann Gat
In article <eppstein-9700A3.10461306102003 at news.service.uci.edu>, David
In article
<my-first-name.my-last-name-0610030955090001 at k-137-79-50-101.jpl.nasa.go
v>,
: (with-collector collect
: (do-file-lines (l some-file-name)
: (if (some-property l) (collect l))))
: This returns a list of all the lines in a file that have some property.
OK, that's _definitely_ just a filter: filter someproperty somefilename
Perhaps throw in a fold if you are trying to abstract "collect".
The net effect is a filter, but again, you need to stop thinking about the
"what" and start thinking about the "how", otherwise, as I said, there's
no reason to use anything other than machine language.
Answer 1: literal translation into Python. The closest analogue of
with-collector etc would be Python's simple generators (yield keyword)
yield l
You left out the with-collector part.
But it's true that my examples are less convincing given the existence of
yield (which I had forgotten about). But the point is that in pre-yield
Python you were stuck until the language designers got around to adding
it.
I'll try to come up with a more convincing short example if I find some
free time today.
I'm afraid it's very hard to give any convincing examples of the
utility of macros -- as long as you are sticking to trivial examples.
On the other hand, you can't exactly post real world complex examples
of how macros saved you time and LOC (we all have em) because readers'
eyes would just glaze over. I think macros are just another one of
CL's features that most people just don't get until they actually
use them. But here's a small one:

I wrote about 60 lines worth of macro based code (including a few reader
macros) that allows me to write things like:

(with-dbconnection
(sql-loop-in-rows
"select col1, col2 from somewhere where something"
:my-package row-var "pfx"
...
...some code...
...))

In the "some code" section, the result columns' values are accessed by
#!pfx-colname (eg #!pfx-col1), or directly from row-var using
#?pfx-colname (which returns the position). Also, error handling code
can be automatically included by the macro code. How much time and
effort (and possible bugs) has this saved me? Well at least 60+ lines
or more of boilerplate every time I use this pattern.. Plus the expansions
for #!colname include error checks/warnings etc. -- all hidden from view.

Jock
---
www.fractal-recursions.com
Greg Ewing (using news.cis.dfn.de)
2003-10-13 02:28:57 UTC
Permalink
It has sometimes been said that Lisp should use first and
rest instead of car and cdr
I used to think something like that would be more logical, too.
Until one day it occurred to me that building lists is only
one possible, albeit common, use for cons cells. A cons cell
is actually a completely general-purpose two-element data
structure, and as such its accessors should have names that
don't come with any preconceived semantic connotations.

From that point of view, "car" and "cdr" are as good
as anything!
--
Greg Ewing, Computer Science Dept,
University of Canterbury,
Christchurch, New Zealand
http://www.cosc.canterbury.ac.nz/~greg
Pascal Costanza
2003-10-15 20:22:13 UTC
Permalink
In article <bmgh32$1a32$1 at f1node01.rhrz.uni-bonn.de>,
...
I think that's the essential point here. The advantage of the names
car and cdr is that they _don't_ mean anything specific.
gdee, you should read early lisp history ;-). car and cdr ha[d|ve] a
very specific meaning
Yes, but no one (no one at all) refers to that meaning anymore. It's a
historical accident that doesn't really matter anymore when developing code.


Pascal
Björn Lindberg
2003-10-10 14:16:42 UTC
Permalink
If your problems are trivial, I suppose the presumed lower startup
costs of Python may mark it as a good solution medium.
I find no significant difference in startup time between python and
mzscheme.
My preliminary results in this very important benchmark indicate that
python performs equally well to the two benchmarked Common Lisps:

200 bjorn at nex:~> time for ((i=0; i<100; i++)); do lisp -noinit -eval '(quit)'; done

real 0m2,24s
user 0m1,36s
sys 0m0,83s
201 bjorn at nex:~> time for ((i=0; i<100; i++)); do lisp -noinit -eval '(quit)'; done

real 0m2,24s
user 0m1,39s
sys 0m0,82s
202 bjorn at nex:~> time for ((i=0; i<100; i++)); do clisp -q -x '(quit)'; done

real 0m2,83s
user 0m1,74s
sys 0m1,03s
203 bjorn at nex:~> time for ((i=0; i<100; i++)); do clisp -q -x '(quit)'; done

real 0m2,79s
user 0m1,67s
sys 0m1,09s
204 bjorn at nex:~> time for ((i=0; i<100; i++)); do python -c exit; done

real 0m2,41s
user 0m1,85s
sys 0m0,52s
205 bjorn at nex:~> time for ((i=0; i<100; i++)); do python -c exit; done

real 0m2,41s
user 0m1,89s
sys 0m0,52s

</sarcasm>


Björn
Jacek Generowicz
2003-10-15 16:04:46 UTC
Permalink
HOFs are not a special Lisp thing. Haskell does them much better,
for example... and so does Python.
the alpha and omega of HOFs is that functions are first class
objects that can be passed and returned.
How do you reconcile these two statements ?

[Hint: functions in Lisps are "first class objects that can be passed
and returned"; how does Python (or Haskell) do this "alpha and omega
of HOFs" "much better" ?]
David Eppstein
2003-10-16 18:43:06 UTC
Permalink
In article <wub512qq.fsf at ccs.neu.edu>, Joe Marshall <jrm at ccs.neu.edu>
Did it occur to you that people maybe use python not so much because they
are
retards but because it's vastly more effective than CL at the tasks they
currently need to perform? Should I send you a few hundred lines of my
python
code so that you can try translating them into CL?
Sounds like an interesting challenge...
For simple use of built-in libraries,
http://online.effbot.org/2003_08_01_archive.htm#troll
looks like a good test case.
--
David Eppstein http://www.ics.uci.edu/~eppstein/
Univ. of California, Irvine, School of Information & Computer Science
Kenny Tilton
2003-10-12 01:26:39 UTC
Permalink
...
The very 'feature' that was touted by Erann Gat as macros' killer
advantage in the WITH-CONDITION-MAINTAINED example he posted is the
crucial difference: functions (HO or not) and classes only group some
existing code and data; macros can generate new code based on examining,
and presumably to some level *understanding*, a LOT of very deep things
about the code arguments they're given.
Stop, you're scaring me. You mean to say there are macros out there whose
output/behavior I cannot predict? And I am using them in a context where
I need to know what the behavior will be? What is wrong with me? And
what sort of non-deterministic macros are these, that go out and make
their own conclusions about what I meant in some way not documented?
Let's start with that WITH-CONDITION-MAINTAINED example of Gat. Remember
it?
No, and Google is not as great as we think it is. :( I did after
extraordinary effort (on this my second try) find the original, but that
was just an application of the macro, not its innards, and I did not get
enough from his description to make out what it was all about. Worse, I
could not find your follow-up objections. I had stopped following this
thread to get some work done (and because I think the horse is dead).

All I know is that you are trying to round up a lynch mob to string up
WITH-MAINTAINED-CONDITION, and that Lisp the language is doomed to
eternal damnation if we on c.l.l. do not denounce it. :)

No, seriously, what is your problem? That the macro would walk the code
of the condition to generate a demon that would not only test the
condition but also do things to maintain the condition, based on its
parsing of the code for the condition?

You got a problem with that? Anyway, this is good, I was going to say
this chit chat would be better if we had some actual macros to fight over.

[Apologies up front: I am guessing left and right at both the macro and
your objections. And ILC2003 starts tomorrow, so I may not get back to
you all for a while.]

kenny

ps. Don't forget to read Paul Graham's Chapters 1 & 8 in On Lisp, from
now on I think it is pointless not to be debating what he said, vs what
we are saying. The whole book is almost dedicated to macros. From the
preface:

"The title [On Lisp] is intended to stress the importance of bottom-up
programming in Lisp. Instead of just writing your program in Lisp, you
can write your own language on Lisp, and write your program in that.

"It is possible to write programs bottom-up in any language, but Lisp is
the most natural vehicle for this style of programming. In Lisp,
bottom-up design is not a special technique reserved for unusually large
or difficult programs. Any substantial program will be written partly in
this style. Lisp was meant from the start to be an extensible language.
The language itself is mostly a collection of Lisp functions, no
different from the ones you define yourself. What's more, Lisp functions
can be expressed as lists, which are Lisp data structures. This means
you can write Lisp functions which generate Lisp code.

"A good Lisp programmer must know how to take advantage of this
possibility. The usual way to do so is by defining a kind of operator
called a macro. Mastering macros is one of the most important steps in
moving from writing correct Lisp programs to writing beautiful ones.
Introductory Lisp books have room for no more than a quick overview of
macros: an explanation of what macros are,together with a few examples
which hint at the strange and wonderful things you can do with them.

"Those strange and wonderful things will receive special attention here.
One of the aims of this book is to collect in one place all that people
have till now had to learn from experience about macros."

Alex, have you read On Lisp?
--
http://tilton-technology.com
What?! You are a newbie and you haven't answered my:
http://alu.cliki.net/The%20Road%20to%20Lisp%20Survey
Alex Martelli
2003-10-10 17:33:04 UTC
Permalink
Kenny Tilton wrote:
...
But methinks a number of folks using Emacs Elisp and Autocad's embedded
Lisp are non-professionals.
Methinks there are a great many more people using the VBA
interface to AutoCAD than its Lisp interface. In fact, my friends
(ex-Autodesk) told me that's the case.
Sheesh, who hasn't been exposed to basic? From my generation, that is.
:) But no matter, the point is anyone can handle parens if they try for
more than an hour.
Yes, but will that make them most happy or productive? The Autocad
case appears to be relevant, though obviously only Autodesk knows
for sure. When I was working in the mechanical CAD field, I had
occasion to speak with many Autocad users -- typically mechanical
drafters, or mechanical or civil engineers, by training and main working
experience -- who HAD painfully (by their tales) learned to "handle
parens", because their work required them occasionally to write Autocad
macros and once upon a time Autolisp was the only practical way to do it --
BUT had jumped ship gleefully to the VBA interface, happily ditching
years of Autolisp experience, just as soon as they possibly could (or
earlier, i.e. when the VBA thingy was very new and still creaky in its
integration with the rest of Autocad -- they'd rather brave the bugs
of the creaky new VBA thingy than stay with the Autolisp devil they
knew). I don't know if syntax was the main determinant. I do know
that quite a few of those people had NOT had any previous exposure to
any kind of Basic -- we're talking about mechanics-junkies, more likely
to spend their spare time hot-rodding their cars at home (Bologna is,
after all, about 20 Km from Ferrari, 20 Km on the other side from
Minardi, while the Ducati motorcycle factory is right here in town,
etc -- *serious* mechanics-freaks country!), rather than playing with
the early home computers, or program for fun.

So, I think Autocad does prove that non-professional programmers
(mechanical designers needing to get their designs done faster) CAN
learn to handle lisp if no alternatives are available -- and also
that they'd rather not do so, if any alternatives are offered. (I
don't know how good a lisp Autolisp is, anyway -- so, as I mentioned,
there may well be NON-syntactical reasons for those guys' dislike
of it despite years of necessarily using it as the only tool with
which they could get their real job done -- but I have no data that
could lead me to rule out syntax as a factor, at least for users
who were OCCASIONAL users anyway, as programming never was their
REAL, MAIN job, just means to an end).
You (Alex?) also worry about groups of programmers and whether what is
good for the gurus will be good for the lesser lights.
If you ever hear me call anyone who is not an expert programmer
a "lesser light" then I give you -- or anyone else here -- permission
to smack me cross-side the head.
Boy, you sure can read a lot into a casually chosen cliche. But can we
clear up once and for all whether these genius scientists are or are not
as good a programmer as you? I thought I heard Python being recommended
as better for non-professional programmers.
Dunno 'bout Andrew, but -- if the scientists (or their employers) are
paying Andrew for programming consultancy, training, and advice, would
it not seem likely that they consider that he's better at those tasks
than they are...? Otherwise why would they bother? Most likely the
scientists are better than him at _other_ intellectual pursuits -- be
it for reasons of nature, nurture, or whatever, need not interest us
here, but it IS a fact that some people are better at some tasks.
There is too much programming to be done, to let ONLY professional
programmers do it -- just like there's too much driving to be done, to
let only professional drivers do it -- still, the professionals can be
expected to be better at their tasks of specialistic expertise.


Alex
Erann Gat
2003-10-13 02:04:09 UTC
Permalink
In article <ue0ib.267972$R32.8718052 at news2.tin.it>, Alex Martelli
Let's start with that WITH-CONDITION-MAINTAINED example of Gat. Remember
it? OK, now, since you don't appear to think it was an idiotic example,
then SHOW me how it takes the code for the condition it is to maintain and
the (obviously very complicated: starting a reactor, operating the reactor,
stopping the reactor -- these three primitives in this sequence) program
over which it is to maintain it, and how does it modify that code to ensure
this purpose. Surely, given its perfectly general name, that macro does not
contain, in itself, any model of the reactor; so it must somehow infer it
(guess it?) from the innards of the code it's analyzing and modifying.
It is not necessary to exhibit a theory of how WITH-CONDITION-MAINTAINED
actually works to understand that if one had such a theory one can package
that theory for use more attractively as a macro than as a function. It
is not impossible to package up this functionality as a function, but it's
very awkward. Control constructs exist in programming languages for a
reason, despite the fact that none of them are really "necessary". For
example, we can dispense with IF statements and replace them with a purely
functional IF construct that takes closures as arguments. Or we can do
things the Java way and create a new Conditional object or some such
thing. But it's more convenient to write an IF statement.
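
The functional-IF alternative mentioned above can be sketched in Python (a hedged illustration of the idea, not anyone's proposed API): each branch becomes a zero-argument closure, and only the selected one is ever called.

```python
def functional_if(condition, then_branch, else_branch):
    # Both branches are thunks (zero-argument closures); only the
    # chosen one is evaluated, mirroring the laziness of a real IF.
    return then_branch() if condition else else_branch()

result = functional_if(2 > 1, lambda: "yes", lambda: "no")   # "yes"
```

It works, but every call site must wrap its branches in lambdas by hand, which is exactly the inconvenience that makes a built-in (or macro-defined) IF preferable.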

The claim that macros are useful is nothing more and nothing less than the
claim that the set of useful control constructs is not closed. You can
believe that or not. To me it is self-evidently true, but I don't know
how to convince someone that it's true who doesn't already believe it.
It's rather like arguing over whether the Standard Model of Physics covers
all the useful cases. There's no way to know until someone stumbles
across a useful case that the Standard Model doesn't cover.
For example, the fact that Gat himself says that if what I want to write
are normal applications, macros are not for me: only for those who want
to push the boundaries of the possible are they worthwhile. Do you think
THAT is idiotic, or wise? Please explain either the reason of the drastic
disagreements in your camp, or why most of you keep trying to push
macros (and lisp in general) at those of us who are NOT particularly
interested in "living on the edge" and running big risks for their own sake,
according to your answer to the preceding question, thanks.
I can't speak for anyone but myself of course, but IMO nothing worthwhile
is free of risks. I also think you overstate the magnitude of the risk.
You paint nightmare scenarios of people "changing the language"
willy-nilly in all sorts of divergent ways, but 1) in practice on a large
project people tend not to do that and 2) Lisp provides mechanisms for
isolating changes to the language and limiting the scope of their effect.
So while the possibility exists that someone will change the language in a
radical way, in practice this is not really a large risk. The risk of
memory corruption in C is vastly larger than the risk of "language
corruption" in Lisp, and most people seem to take that in stride.
...and there's another who has just answered in the EXACTLY opposite
way -- that OF COURSE macros can do more than HOF's. So, collectively
speaking, you guys don't even KNOW whether those macros you love so
much are really necessary to do other things than non-macro HOFs allow
(qualification inserted to try to divert the silly objection, already made
by others on your side, that macros _are_ functions), or just pretty things
up a little bit.
But all any high level language does is "pretty things up a bit". There's
nothing you can do in any language that can't be done in machine
language. "Prettying things up a bit" is the whole point. Denigrating
"prettying things up a bit" is like denigrating cars because you can get
from here to there just as well by walking, and all the car does is "speed
things up a bit".

E.
Hans Nowak
2003-10-12 00:42:57 UTC
Permalink
On Wed, 08 Oct 2003 18:28:36 +1300, "Greg Ewing (using news.cis.dfn.de)"
Hmm, if I recall correctly, in Latin the plural of 'virus' is 'virus'.
Actually, the last discussion of this that I saw (can't remember where)
came to the conclusion that the word 'virus' didn't *have* a plural
in Latin at all, because its original meaning didn't refer to something
countable.
'virus' (slime, poison, venom) is a 2nd declension neuter noun and
technically does have a plural 'viri'.
Doesn't it belong to the group that includes 'fructus'? Of course this has
nothing to do with the plural used in English, but still... :-)

This page, which has a lot of info on this issue, seems to think so:

http://www.perl.com/language/misc/virus.html
--
Hans (hans at zephyrfalcon.org)
http://zephyrfalcon.org/
David Rush
2003-10-06 20:49:08 UTC
Permalink
Guido's generally adamant stance for simplicity has been the
key determinant in the evolution of Python.
Simplicity is good. I'm just finding it harder to believe that Guido's
perception of simplicity is accurate.
Anybody who doesn't value simplicity and uniformity is quite
unlikely to be comfortable with Python
I would say that one of the reasons why I program in Scheme is *because* I
value simplicity and uniformity. The way that Python has been described in
this discussion makes me think that I would really
*hate* Python for its unnecessary complications if I went back to it.
And I have spent years admiring Python from afar. The only reason I
didn't adopt it years ago was that it was lagging behind the releases
of Tk which I needed for my cross-platform aspirations. At the time, I
actually enjoyed programming in Python as a cheaper form of Smalltalk
(literally, Unix Smalltalk environments were going for $4000/seat).

Probably the most I can say now is that I think that Python's syntax is
unnecessarily reviled (and there are a *lot* of people who think that
Python's syntax is *horrible* - I am not one of them mind you), in
much the same way that s-expressions are a stumbling block for programmers
from infix-punctuation language communities.

david rush
--
(\x.(x x) \x.(x x)) -> (s i i (s i i))
-- aki helin (on comp.lang.scheme)
Alex Martelli
2003-10-10 09:12:12 UTC
Permalink
Pick the one Common Lisp implementation that provides the stuff you
need. If no Common Lisp implementation provides all the stuff you
need, write your own libraries or pick a different language. It's as
simple as that.
Coming from a C/C++ background, I'm surprised by this attitude. Is
portability of code across different language implementations not a
priority for LISP programmers?
Libraries distributed as binaries are not portable across different C++
implementations on the same machine (as a rule).
This isn't true anymore (i.e., for newer compilers).
Wow, that IS great news! Does it apply to 32-bit Intel-oid machines (the
most widespread architecture) and the newest releases of MS VC++ (7.1)
and gcc, the most widespread compilers for it? I can't find any docs on what
switches or whatever I should give the two compilers to get seamless interop.

Specifically, the standard Python on Windows has long been built with MSVC++
and this has given problems to C-coded extension writers who don't own that
product -- it IS possible to use other compilers to build the extensions, but
only with much pain and some limitations (e.g on FILE* arguments). If this
has now gone away there would be much rejoicing -- with proper docs on the
Python side of things and/or use of whatever switches are needed to enable
this, if any, when we do the standard Python build on Windows.
Mangling, exception handling, etc, is all covered by the ABI.
IBM's XLC 6.0 for OSX also follows this C++ ABI, and is thus compatible
with G++ 3.x on OSX.
I'm not very familiar with Python on the Mac but I think it uses another
commercial compiler (perhaps Metrowerks?), so I suspect the same question
may apply here. It's not as crucial on other architectures where Python is
more normally built with free compilers, but it sure WOULD still be nice to
think of possible use of costly commercial compilers with hypothetically
great optimizations for the distribution of some "hotspot" object files, if
that sped the interpreter up without giving any interoperability problems.


Alex
Vis Mike
2003-10-17 19:48:47 UTC
Permalink
"Luke Gorrie" <luke at bluetail.com> wrote in message
Anonymous functions *can* be more clear than any name. Either because
they are short and simple, because it is hard to come up with a good
name, and/or becuase they are ambigous.
Say I want to attach an index to elements of a list. I could write
integers = [1..]
attach_index ls = zip integers ls
or just
attach_index ls = zip [1..] ls
If we're arguing to eliminate names that don't say very much, then
attach_index = zip [1..]
I think dynamic scoping within blocks really trims down on duplication and
makes the code easier to read. For example:

employees sort: [ | a b | a date < b date ]

A lot of typing for a simple concept:

employees sort: [ < date ]

I'm not against too much typing to be clear, but rather too much typing
that makes the concept unclear.

-- Mike
Whether you want to give an explicit name to the list of integers is
not given. If the indexed list is local, it is better to use the
definition directly; I don't want to look up the definition of
integers (perhaps in a different module) to check whether it is [1..]
or [0..].
And for the exact same reason you might like to just write "zip [1..]"
instead of using a separate "attach_index" function.
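
In Python terms (a hypothetical rendering of the Haskell above, not code from the thread), the same trade-off between the named helper and the inline expression looks like:

```python
from itertools import count

def attach_index(ls):
    # Named form: pairs each element with a 1-based index.
    return list(zip(count(1), ls))

# Inline form, skipping the name entirely:
pairs = list(zip(count(1), ["a", "b", "c"]))
```

Whether the name earns its keep depends, as argued above, on whether readers benefit more from the name or from seeing the definition directly.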
Cheers,
Luke
Raffael Cavallaro
2003-10-16 02:50:28 UTC
Permalink
In article <9d140b81.0310151301.4b811dfa at posting.google.com>,
The reason people are attacking your posts is because the above has
NOTHING to do with anonymous functions. This advice should be
followed independent of anonymous functions.
You're misreading those who have argued against me. They seem to think
that this advice should _not_ be followed in the case of anonymous
functions. I.e., the anonymous function camp seem to think that
anonymous functions are such a wonderfully expressive tool that they are
more clear than _actual descriptive function names_.

I agree with you; this advice should be followed, period (well, it is
_my_ advice, after all). But advocates of a particular functional style
think it is perfectly alright to use the same unnamed functional idiom
over and over throughout source code, because functional abstractions
are so wonderfully expressive. They think it is just fine to expose the
implementation details in code locations where it is completely
unnecessary to know the implementation specifics.

How else can this be construed?

In article <pan.2003.10.14.23.19.11.733879 at knm.org.pl>,
A name is an extra level of indirection. You must follow it to be
100% sure what the function means, or to understand what does it really
mean that it does what it's named after. The code also gets longer - not
only more verbose but the structure of the code gets more complex with
more interdependent parts. When you have lots of short functions, it's
harder to find them. There are many names to invent for the writer and
many names to remember for a reader. Function headers are administrative
stuff, it's harder to find real code among abstractions being introduced
and used.
In other words, don't use names _at all_ if you can avoid them. Just
long, rambling tracts of code filled with anonymous functions. After
all, you'd just have to go look at the named function bodies anyway, just
to be _sure_ they did what they said they were doing. (I wonder if he
disassembles OS calls too, you know, just to be sure).

Really I personally think they are often just enamored of their own
functional l33tness, but you'll always have a hard time convincing
programmers that their unstated preference is to write something so
dense that only they and others equally gifted in code decipherment can
grasp it. Many programmers take it badly when you tell them that their
job should be much more about communicating intent to other human beings
than about being extremely clever. As a result of this clever agenda, an
unmaintainably large proportion of the code that has ever been written
is way too clever for its own good.
Raffael Cavallaro
2003-10-15 00:41:28 UTC
Permalink
In article <pan.2003.10.14.23.19.11.733879 at knm.org.pl>,
Sometimes a function is so simple that its body is more clear than any
name. A name is an extra level of indirection. You must follow it to be
100% sure what the function means, or to understand what does it really
mean that it does what it's named after.
Your argument is based on the assumption that whenever people express
_what_ a function does, they do so badly, with an inappropriate name.

We should choose our mode of expression based on how things work when
used correctly, not based on what might happen when used foolishly. We
don't write novels based on how they might be misread by the
semi-literate.

Anonymous functions add no clarity except to our understanding of
_implementation_, i.e., _how_ not _what_. Higher level abstractions
should express _what_. Implementation details should remain separate,
both for clarity of exposition, and for maintenance and change of
implementation.
The code also gets longer
No, it gets shorter, because you don't repeat your use of the same
abstraction over and over. You define it once, then reference it by name
everywhere you use it.
- not
only more verbose but the structure of the code gets more complex with
more interdependent parts.
No, there are _fewer_ interdependent parts, because the parts that
correspond to the anonymous function bodies are _completely gone_ from
the main exposition of what is happening. These formerly anonymous
function bodies are now elsewhere, where they will only be consulted when
it is necessary to modify them.

You seem to take the view that client code can't trust the interfaces it
uses, that you have to see how things are implemented to make sure they
do what they represent to do.

This is a very counterproductive attitude. Code should provide high
level abstractions, and clients of this code should be able to tell what
it does just by looking at the interface, and maybe a line of
documentation. It shouldn't be necessary to understand the internals of
code one uses just to use it. And it certainly isn't necessary to
include the internals of code one uses where one is using it (i.e.,
anonymous functions). That's what named functions and macros are for.
Inlining should be done by compilers, not programmers.

Anonymous functions are a form of unnecessary information overload. If I
don't need to see how something works right here, in this particular
context, then don't put its implementation here. Just refer to it by
name.
When you have lots of short functions, it's
harder to find them. There are many names to invent for the writer and
many names to remember for a reader.
Which is why names should be descriptive. Then, there's little to
remember. I don't need to remember what add-offset does, nor look up
its definition, to understand its use in some client code. Anonymous
functions are sometimes used as a crutch by those who can't be bothered
to take the time to attend to the social and communicative aspects of
programming. Ditto for overly long function bodies. How to express
intent to human readers is just as important as getting the compiler to
do what you want. These same programmers seem enamored of
crack-smokingly short and cryptic identifier and function names, as if
typing 10 characters instead of 3 is the real bottleneck in modern
software development. (Don't these people know how to touch type?)
Function headers are administrative
stuff, it's harder to find real code among abstractions being introduced
and used.
Seemingly to you, the only "real code" is low level implementation. In
any non-trivial software, however, the "real code" is the interplay of
high level abstractions. At each level, we craft a _what_ from some less
abstract _how_, and the _what_ we have just defined, is, in turn used as
part of the _how_ for an even higher level of abstraction or
functionality.
Why do you insist on naming *functions*? You could equally well say that
every list should be named, so you would see its purpose rather than its
contents.
I think this concept is called variable bindings ;^)
Perhaps every number should be named, so you can see what it
represents rather than its value.
Actually, they are _already_ named. The numerals we use _are_ names, not
numbers themselves. I'm surprised you aren't advocating the use of
Church Numerals for all numerical calculation.
You could say that each statement of
a compound statement should be moved to a separate function, so you can
see what it does by its name, not how it does it by its contents. It's
all equally absurd.
In the Smalltalk community the rule of thumb is that if a method body
gets to be more than a few lines, you've failed to break it down into
smaller abstractions (i.e., methods).
A program should balance named and unnamed objects. Both are useful,
there is a continuum between cases where one or the other is more clear
and it's subjective in border cases, but there is place for unnamed
functions - they are not that special. Most high level languages have
anonymous functions for a reason.
Yes, but they should live inside the bodies of named functions. Not lie
exposed in the middle of higher level abstractions. Please also see my
reply to Joe Marshall/Prunesquallor a few posts up in this thread.
Raffael Cavallaro
2003-10-17 12:03:49 UTC
Permalink
In article <3f8e1312 at news.sentex.net>,
map is an abstraction that
specifies that you want to apply a certain operation to each element
of a collection, with adding the offset being the desired operation.
Sorry to reply so late - I didn't see this post.

In the context in which the offsets are added, it isn't necessary to
know that the offsets are added using map, as opposed, for example, to
an iterative construct, such as loop. Since we don't need to know _how_
the adding of offsets is done at every location in the source where we
might want to add an offset to the elements of a list or vector, we
should _name_ this bit of functionality. This is an even bigger win
should we ever decide to switch from map to loop, or do, or dolist, etc.
Then we write a single change, not one for every time we use this bit of
functionality.
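
A minimal Python sketch of the point being made (the name `add_offset` is taken from the discussion; the body is a hypothetical choice of implementation):

```python
def add_offset(offset, values):
    # Call sites depend only on this name. The body could equally use
    # map, a for loop, or a comprehension; switching among them later
    # changes exactly one place, not every use site.
    return [v + offset for v in values]

shifted = add_offset(10, [1, 2, 3])   # [11, 12, 13]
```

Callers see _what_ happens (an offset is added); _how_ it happens stays behind the name.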
Jon S. Anthony
2003-10-13 16:48:18 UTC
Permalink
At the moment the only thing I am willing to denounce as idiotic are
your clueless rants.
Excellent! I interpret the old saying "you can judge a man by the quality
of his enemies" differently than most do: I'm _overjoyed_ that my enemies
are the scum of the earth, and you, sir [to use the word loosely], look as
if you're fully qualified to join that self-selected company.
Whatever.

/Jon
Andrew Dalke
2003-10-11 17:27:02 UTC
Permalink
What do I want of the OS running my firewall? Security, security,
security.
It's nice if it can run on very scant resources, offers solid and usable
packet filtering, and has good, secure device drivers available for the
kind of devices I may want in a computer dedicated to firewalling -- all
sorts of ethernet cards, wifi thingies, pppoe and plain old ppp on ISDN
for
emergency fallback if cable service goes down, UPS boxes, a serial console
of course, perhaps various storage devices, and that's about it.
I see no reason why I should use anything but OpenBSD for that.
I'm running a Linksys box as my primary firewall. I like the feeling of
security that the OS is in firmware and can't be updated (I hope) except
through the USB connector. I like that the box is portable (I've taken
it to a couple of conferences), low power (I leave it on all the time), and
quiet.

I do have a network -> dialup box as well when I needed to do dialup
to one of my clients, but I've not needed that feature as a backup
to my DSL in over a year.
(DHCP, DNS, ntp, Squid for proxying, ...).
It does DHCP but not the rest. Would be nice, but I would prefer
those features to be on this side of my firewall. Yes, I know about
OpenBSD's "only one remote hole in the default install, in more
than 7 years" claim to fame.

Andrew
dalke at dalkescientific.com

But is it a sense of security or real security? Hmm.... :)
Alex Martelli
2003-10-12 18:34:39 UTC
Permalink
Pascal Costanza wrote:
...
Does Python allow local function definitions?
...
Can they shadow predefined functions?
...
Yes, named objects, including functions can (locally) shadow
(override) builtins. It is considered a bad habit/practice unless
done intentionally with a functional reason.
Well, this proves that Python has a language feature that is as
dangerous as many people seem to think macros are.
Indeed, a chorus of "don't do that" is the typical comment each
and every time a newbie falls into that particular mis-use. Currently,
the --shadow option of PyChecker only warns about shadowing of
_variables_, not shadowing of _functions_, but there's really no
reason why it shouldn't warn about both. Logilab's pylint does
diagnose "redefining built-in" with a warning (I think they mean
_shadowing_, not actually _redefining_, but this may be an issue
of preferred usage of terms).
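
As a concrete (hypothetical) illustration of the shadowing under discussion -- a local definition quietly overriding a builtin, but only within its own scope:

```python
def demo():
    # This local definition shadows the builtin of the same name,
    # but only inside demo's scope.
    def len(seq):
        return 42
    return len([1, 2, 3])   # calls the local len

assert demo() == 42          # the local len wins here
assert len([1, 2, 3]) == 3   # the builtin is untouched outside
```

This is exactly the pattern PyChecker and pylint warn about: legal, occasionally useful, and usually an accident.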

"Nailing down" built-ins (at first with a built-in warning for overriding
them, later in stronger ways -- slowly and gradually, like always, to
maintain backwards compatibility and allow slow, gradual migration of the
large existing codebase) is under active consideration for the next version
of Python, expected (roughly -- no firm plans yet) in early 2005.

So, yes, Python is not perfect today (or else, we wouldn't be planning a
2.4 release...:-). While it never went out of its way to give the user "as
much rope as needed to shoot oneself in the foot", neither did it ever
spend enormous energy in trying to help the user avoid many possible errors
and dubious usage. Such tools as PyChecker and pylint are a start, and
some of their functionality should eventually be folded back into the
core, just as tabnanny's was in the past with the -t switch. I don't think
the fundamental Python will ever nag you for missing comments or
docstrings, too-short names, etc, the way pylint does by default (at
least, I sure hope not...!-), but there's quite a bit I _would_ like to have
it do in terms of warnings and, eventually, error messages for
"feechurs" that only exist because it was once simple to allow than
to forbid them, not by a deliberate design decision to have them there.

Note that SOME built-ins exist SPECIFICALLY for the purpose of
letting you override them. Consider, for example, __import__ -- this
built-in function just exposes the inner mechanics of the import
statement (and friends) to let you get modules from some other
place (e.g., when your program must run off a relational database
rather than off a filesystem). In other word, it's a rudimentary hook
in a "Template Method" design pattern (it's also occasionally handy
to let you import a module whose name is in a string, without
going to the bother of an 'exec', so it will surely stay for that purpose
even though we now have a shiny brand-new architecture for
import hooks -- but that's another story). Having a single hook of
global effect has all the usual downsides, of course (which is exactly
why we DO have that new architecture;-): two or more complicated
packages doing import-hooks can't necessarily coexist within the
same Python application program (the only saving grace which let
us live with that simplistic hook for so many years is that importing
from strange places is typically a need of a certain deployment of
an overall application, not of a package -- still, such packages DO
exist, so the previous solution was far from perfect).

Anyway, back to your contention: I do not think that the fact that
the user can, within his functions, choose very debatable names,
such as those which shadow built-ins, is anywhere as powerful,
and therefore as dangerous, as macros. My own functions using
'sum' will get the built-in one even if yours do weird things with
that same name as a local variable of their own. The downsides
of shadowing are essentially as follows...

a newbie posts some fragment of his code asking for guidance,
and among other things that fragment has
    for i in range(len(thenumbers)):
        total = total + thenumbers[i]
he will receive many suggestions on how to make it better,
including the ideal one:
    total = sum(thenumbers, total)
But then he tries it out and reports "it breaks" (newbies rarely
are clueful enough to just copy and paste error messages). And
we all waste lots of time finding out that this is because... the
hapless newbie had named HIS OWN FUNCTION 'sum', so
this was causing runaway recursion. Having met similar issues
over and over, one starts to warn newbies against shadowing
and get sympathetic with the idea of forbidding it:-).
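
The runaway recursion from the anecdote can be reproduced in a few lines (a hypothetical reconstruction of the newbie's situation, not the actual posted code):

```python
def sum(thenumbers, total=0):
    # This module-level 'sum' shadows the builtin, so the call below
    # re-enters this very function instead of builtins.sum:
    # runaway recursion.
    return sum(thenumbers, total)

try:
    sum([1, 2, 3])
except RecursionError:
    print("runaway recursion, as the anecdote describes")
```

The suggested one-liner was perfectly good advice; it just collided with the name the newbie had already claimed.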

That doesn't really compare to an extra feature in the language
that is deliberately designed to let reasonably-clueful users do
their thing, isn't deprecated nor warned against by anybody at
all (with a few isolated voices speaking about "abuse" of macros
in this thread, but still with an appreciation for macros when
_well_ used), and is MEANT to do what newbies _accidentally_
do with shadowing & much more besides;-).


Alex
Ng Pheng Siong
2003-10-08 05:47:18 UTC
Permalink
Well, I would say that kanji is badly designed, compared to the Latin
alphabet. The vowels are composed with consonants (with diacritical
marks) and consonants are written following four or five groups with
additional diacritical marks to distinguish within the groups. It's
more a phonetic code than a true alphabet.
Kanji are ideograms borrowed from Chinese. Kanji literally means "Han
character".

I think the diacritical marks you mention are pronunciation guides, much
like Hanyu Pinyin is a Mandarin pronunciation guide for Chinese.

In Hanyu Pinyin, Kanji (read as a Chinese word phrase) is rendered "han4
zi4".

In Korean, Kanji is pronounced Hanja.

Same two-character word phrase, different pronunciations.
--
Ng Pheng Siong <ngps at netmemetic.com>

http://firewall.rulemaker.net -+- Manage Your Firewall Rulebase Changes
http://sandbox.rulemaker.net/ngps -+- Open Source Python Crypto & SSL
Alexander Schmolck
2003-10-12 17:25:04 UTC
Permalink
The smartest people I know aren't programmers. What does
that say?
I think this is vital point. CL's inaccessibility is painted as a feature of
CL by many c.l.l denizens (keeps the unwashed masses out), but IMO the CL
community stunts and starves itself intellectually big time because CL is (I
strongly suspect) an *extremely* unattractive language for smart people
(unless they happen to be computer geeks).

Apart from the fact that this yields a positive feedback loop, I'd think that
even the smart computer geeks are likely to suffer from this incestuousness in
the medium run.

'as
David Eppstein
2003-10-14 15:33:02 UTC
Permalink
In article <tyf1xtf50a5.fsf at pcepsft001.cern.ch>,
However, please _do_ tell me if you hear of anyone implementing Python
in Lisp[*].
Having Python as a front-end to Lisp[*] (as it is now a front-end to
C, C++ and Java) would be very interesting indeed.
It is now also a front-end to Objective C, via the PyObjC project.
--
David Eppstein http://www.ics.uci.edu/~eppstein/
Univ. of California, Irvine, School of Information & Computer Science
Jon S. Anthony
2003-10-09 20:26:50 UTC
Permalink
Ahhh, so make the language easier for computers to understand and
harder for intelligent users to use? ;)
Spoken like a true Python supporter...

/Jon
Andrew Dalke
2003-10-09 19:52:47 UTC
Permalink
Note that I did not at all make reference to macros. Your statements
to date suggest that your answer to the first is "no."
That's not exactly my position, rather my position is that just about
anything can and will be abused in some way shape or fashion. It's a
simple fact of working in teams. However I would rather err on the side
of abstractability and re-usability than on the side of forced
restrictions.

You are correct. I misremembered "Tolton" as "Tilton" and confused
you with someone else. *blush*

My answer, btw, that the macro preprocessor in C is something
which is useful and too easily prone to misuse. Eg, my original
C book was "C for native speakers of Pascal" and included in
the first section a set of macros like

#define BEGIN {
#define END }

It's not possible to get rid of cpp for C because the language
is too weak, but it is something which takes hard experience to
learn when not to use.

As for a language feature which should never be used. Alex Martelli
gave an example of changing the default definition for == between
floats, which broke other packages, and my favorite is "OPTION
BASE 1" in BASIC or its equivalent in Perl and other languages.
That is, on a per-program (or even per-module) basis, redefine
the 0 point offset for an array.

Andrew
dalke at dalkescientific.com
Alexander Schmolck
2003-10-07 20:25:53 UTC
Permalink
(I'm ignoring the followup-to because I don't read comp.lang.python)
Well, I suppose this thread has spiralled out of control already anyway :)
Indentation-based grouping introduces a context-sensitive element into
the grammar at a very fundamental level. Although conceptually a
block is indented relative to the containing block, the reality of the
situation is that the lines in the file are indented relative to the
left margin. So every line in a block doesn't encode just its depth
relative to the immediately surrounding context, but its absolute
depth relative to the global context.
I really don't understand why this is a problem, since its trivial to
transform python's 'globally context' dependent indentation block structure
markup into into C/Pascal-style delimiter pair block structure markup.

Significantly, AFAICT you can easily do this unambiguously and *locally*, for
example your editor can trivially perform this operation on cutting a piece of
python code and its inverse on pasting (so that you only cut-and-paste the
'local' indentation). Prima facie I don't see how you lose any fine control.
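
A minimal sketch of the transformation being claimed here (my own illustration, assuming spaces-only indentation and well-formed input): indentation changes map locally to open/close delimiters, using a stack of indent widths much as Python's own tokenizer emits INDENT/DEDENT tokens.

```python
def braceify(src):
    # Emit '{'/'}' markers from indentation changes, tracking a stack
    # of indent widths; each increase opens a block, each decrease
    # closes one or more.
    out, stack = [], [0]
    for line in src.splitlines():
        stripped = line.lstrip()
        if not stripped:
            continue
        indent = len(line) - len(stripped)
        if indent > stack[-1]:
            stack.append(indent)
            out.append("{")
        while indent < stack[-1]:
            stack.pop()
            out.append("}")
        out.append(stripped)
    out.extend("}" * (len(stack) - 1))   # close any still-open blocks
    return "\n".join(out)
```

The inverse direction (delimiters back to indentation) is equally local, which is why an editor can do either on cut or paste.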
Additionally, each line encodes this information independently of the other
lines that logically belong with it, and we all know that data
encoded in one place may be wrong, but it is never inconsistent.
Sorry, I don't understand this sentence, but maybe you mean that the potential
inconsistency between human and machine interpretation is a *feature* for Lisp,
C, Pascal etc!? If so I'm really puzzled.
There is yet one more problem. The various levels of indentation encode
different things: the first level might indicate that it is part of a
function definition, the second that it is part of a FOR loop, etc. So on
any line, the leading whitespace may indicate all sorts of context-relevant
information.
I don't understand why this is any different to e.g. ')))))' in Lisp. The
closing ')' for DEFUN just looks the same as that for IF.
Yet the visual representation is not only identical between all of these, it
cannot even be displayed.
I don't understand what you mean. Could you maybe give a concrete example of
the information that can't be displayed? AFAICT you can have 'sexp'-movement,
markup and highlighting commands all the same with whitespace delimited block
structure.
Is this worse than C, Pascal, etc.? I don't know.
I'm pretty near certain it is better: In Pascal, C etc. by and large block
structure delimitation is regulated in such a way that what has positive
information content for the human reader/programmer (indentation) has zero to
negative information content for the compiler and vice versa. This is a
remarkably bad design (and apart from cognitive overhead obviously also causes
errors).

Python removes this significant problem, at (as far as I'm aware) no real cost
and plenty of additional gain (less visual clutter, no waste of delimiter
characters ('{','}') or introduction of keywords that will be sorely missed as
user-definable names ('begin', 'end')).

In Lisp the situtation isn't quite as bad, because although most of the parens
are of course mere noise to a human reader, not all of them are and because of
lisp's simple but malleable syntactic structure a straighforward replacement
of parens with indentation would obviously result in unreadable code
(fragmented over countless lines and mostly past the 80th column :).

So unlike C and Pascal where a fix would be relatively easy, you would need
some more complicated scheme in the case of Lisp and I'm not at all sure it
would be worth the hassle (especiallly given that efforts in other areas would
likely yield much higher gains).

Still, I'm sure you're familiar with the following quote (with which I most
heartily agree):

"[P]rograms must be written for people to read, and only incidentally for
machines to execute."

People can't "read" '))))))))'.
Worse than Lisp, Forth, or Smalltalk? Yes.
Possibly, but certainly not due to the use of significant whitespace.


'as
Alexander Schmolck
2003-10-15 17:22:49 UTC
Permalink
I don't know why you categorize it as idiocy.
I, having no experience whatsoever with Python
'as
Marco Antoniotti
2003-10-10 16:33:31 UTC
Permalink
Ok. At this point I feel the need to apoligize to everybody for my
rants and I promise I will do my best to end this thread.

I therefor utter the H-word and hopefully cause this thread to stop.

Cheers
--
marco
Paolo Amoroso
2003-10-13 17:03:18 UTC
Permalink
[following up to comp.lang.python and comp.lang.lisp]
necessarily yield optimal productivity in programming. What
language design trade-offs WILL yield such optimal productivity,
*DEPENDING* on the populations and tasks involved, is the crux
of this useless and wearisome debate (wearisome and useless, in
good part, because many, all it seems to me on the lisp side,
appear to refuse to admit that there ARE trade-offs and such
dependencies on tasks and populations).
I agree (Lisp side).
The only example of 'power' I've seen (besides the infamous
with-condition-maintained example) are such trifles as debugging-
output macros that may get compiled out when a global flag to
disable debugging is set -- exactly the kind of micro-optimization
If you are interested in advanced uses of Common Lisp macros, you may
have a look at Paul Graham's book "On Lisp", which is available for
download at his site. Other examples are probably in the Screamer
system by Jeffrey Mark Siskind.


Paolo
--
Paolo Amoroso <amoroso at mclink.it>
Jeremy H. Brown
2003-10-03 17:53:05 UTC
Permalink
This is actually a pretty good list. I'm not commenting on
...
Implementations >10 ~4
===========================================^
See http://alu.cliki.net/Implementation - it lists 9 commercial
implementations, and 7 opensource implementations. There are
probably more.
Thanks. I hadn't realized the spread was that large.
Performance "worse" "better"
Standards IEEE ANSI
Reference name R5RS CLTL2
============================================^
No, CLtL2 is definitely _not_ a reference for ANSI Common Lisp.
It was a snapshot taken in the middle of the ANSI process, and
is out of date in several areas. References which are much closer
to the ANSI spec can be found online at
http://www.franz.com/support/documentation/6.2/ansicl/ansicl.htm
or
http://www.lispworks.com/reference/HyperSpec/Front/index.htm
Thanks again.
Try them both, see which one works for you in what you're doing.
Agreed, but of course, I'd recommend CL :-)
I've arrived at the conclusion that it depends both on your task/goal
and your personal inclinations.

Jeremy
Mark Brady
2003-10-03 18:50:22 UTC
Permalink
"Mark Brady" <kalath at lycos.com> wrote in message
This whole thread is a bad idea.
I could agree that the OP's suggestion is a bad idea but do you
actually think that discussion and more publicity here for Lisp/Scheme
is bad? You make a pretty good pitch below for more Python=>Lisp
converts.
You are right of course, however I dislike cross posting and I also
dislike blatantly arguing with people over language choice. I would
prefer to lead by example. I think one good program is worth a
thousand words. For example people listen to Paul Graham
(http://www.paulgraham.com/avg.html) when he advocates Common Lisp
because he wrote Viaweb using it and made a fortune thanks to Lisp's
features (details in the link).
If you like python then use python.
As I plan to do.
Nothing wrong with that. Most people on these groups would agree that
Python is a very good choice for a wide range of software projects and
it is getting better with every release.

I think that if you can get over S-exps then Scheme and Common Lisp
feel very like python. I would recommend Pythonistas to at least
experiment with Common Lisp or Scheme even if you are perfectly happy
with Python. After all you have nothing to lose. If you don't like it
then fine you always have Python and you've probably learned something
and if you do like it then you have another language or two under your
belt.
Personally I find Scheme and Common Lisp easier to read but that's
just me, I prefer S-exps and there seems to be a rebirth in the Scheme
and Common Lisp communities at the moment. Ironically this seems to
have been helped by python. I learned python then got interested in
its functional side and ended up learning Scheme and Common Lisp. A
lot of new Scheme and Common Lisp developers I talk to followed the
same route. Python is a great language and I still use it for some
things.
Other Lispers posting here have gone to pains to state that Scheme is
not a dialect of Lisp but a separate Lisp-like language. Could you
give a short listing of the current main differences (S vs. CL)? If I
were to decide to expand my knowledge be exploring the current
versions of one(I've read the original SICP and LISP books), on what
basis might I make a choice?
Terry J. Reedy
This is a difficult question to answer. It's a bit like trying to
explain the differences between Ruby and Python to a Java developer
;-)

*Personally* I find it best to think of Scheme and Common Lisp as two
different but very closely related languages. The actual languages and
communities are quite different.

Common Lisp is a large, very pragmatic, industrial strength language
and its community reflects this. Common Lisp has loads of features
that you would normally only get in add-on libraries built right into
the language. Its object system "CLOS" has to be experienced to be
believed and its macro
system is stunning. Some very smart people have already put years of
effort into making it capable of great things such as Nasa's award
winning remote agent software
(http://ic.arc.nasa.gov/projects/remote-agent/).

Scheme is a more functional language and unlike Common Lisp it has a
single namespace for functions and variables (Python is like Scheme in
this regard). Common Lisp can be just as functional but on the whole
the Scheme community seem to embrace functional programming to a
greater extent.

Scheme is like python in that the actual language is quite small and
uses libraries for many of the same tasks Python would use them for,
unlike Common Lisp that has many of these features built into the
language. It also has a great but slightly different macro system
although every implementation I know also has Common Lisp style
macros.

Scheme doesn't have a standard object system (it's more functional)
but has libraries to provide object systems. This is very hard to
explain to python developers, scheme is kind of like a big python
metaclass engine where different object systems can be used at will.
It's better than I can describe and it is really like a more powerful
version of Python's metaclass system.
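To make the analogy concrete, here is a minimal Python sketch (the names Tracing and Point are hypothetical, purely illustrative): a metaclass hooks instance construction, somewhat as a Scheme library can swap in a different object system.

```python
class Tracing(type):
    """Metaclass that records every instantiation of its classes."""
    created = []

    def __call__(cls, *args, **kwargs):
        Tracing.created.append(cls.__name__)  # hook runs before __init__
        return super().__call__(*args, **kwargs)

class Point(metaclass=Tracing):
    def __init__(self, x, y):
        self.x, self.y = x, y

p = Point(1, 2)
print(Tracing.created)  # → ['Point']
```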

Pythonistas who love functional programming may prefer Scheme to
Common Lisp while Pythonistas who want a standard amazing object
system and loads of built in power in their language may prefer Common
Lisp.

To be honest the these tutorials will do a far better job than I
could:

For Scheme get DrScheme:
http://www.drscheme.org/

and go to

'Teach yourself scheme in fixnum days' :
http://www.ccs.neu.edu/home/dorai/t-y-scheme/t-y-scheme.html


For Common Lisp get the trial version of Lispworks:
http://www.lispworks.com/

and go get Mark Watson's free web book:
http://www.markwatson.com/opencontent/lisp_lic.htm

Regards,
Mark.

Ps. If anyone spots a mistake in this mail please correct me, it will
have been an honest one and not an attempt to slander your favourite
language and I will be glad to be corrected, in other words there is
no need to flame me :)
Alex Martelli
2003-10-11 23:19:54 UTC
Permalink
Kenny Tilton wrote:
...
The very 'feature' that was touted by Erann Gat as macros' killer
advantage in the WITH-CONDITION-MAINTAINED example he posted is the
crucial difference: functions (HO or not) and classes only group some
existing code and data; macros can generate new code based on examining,
and presumably to some level *understanding*, a LOT of very deep things
about the code arguments they're given.
Stop, you're scaring me. You mean to say there are macros out there whose
output/behavior I cannot predict? And I am using them in a context where
I need to know what the behavior will be? What is wrong with me? And
what sort of non-deterministic macros are these, that go out and make
their own conclusions about what I meant in some way not documented?
Let's start with that WITH-CONDITION-MAINTAINED example of Gat. Remember
it? OK, now, since you don't appear to think it was an idiotic example,
then SHOW me how it takes the code for the condition it is to maintain and
the (obviously very complicated: starting a reactor, operating the reactor,
stopping the reactor -- these three primitives in this sequence) program
over which it is to maintain it, and how does it modify that code to ensure
this purpose. Surely, given its perfectly general name, that macro does not
contain, in itself, any model of the reactor; so it must somehow infer it
(guess it?) from the innards of the code it's analyzing and modifying.

Do you need to know what the behavior will be, when controlling a reactor?
Well, I sort of suspect you had better. So, unless you believe that Gat's
example was utterly idiotic, I think you can start explaining from right
there.
I think the objection to macros has at this point been painted into a
very small corner.
I drastically disagree. This is just one example, that was given by one of
the most vocal people from your side, and apparently not yet denounced
as idiotic, despite my asking so repeatedly about it, so presumably agreed
with by your side at large. So, I'm focusing on it until its import is
clarified. Once that is done, we can tackle the many other open issues.

For example, the fact that Gat himself says that if what I want to write
are normal applications, macros are not for me: only for those who want
to push the boundaries of the possible are they worthwhile. Do you think
THAT is idiotic, or wise? Please explain either the reason of the drastic
disagreements in your camp, or why most of you do keep trying pushing
macros (and lisp in general) at those of us who are NOT particularly
interested in "living on the edge" and running big risks for their own sake,
according to your answer to the preceding question, thanks.

"Small corner"?! You MUST be kidding. Particularly given that so many
on your side don't read what I write, and that you guys answer the same
identical questions in completely opposite ways (see below for examples
of both), I don't, in fact, see how this stupid debate will ever end, except
by exhaustion. Meanwhile, "the objection to macros" has only grown
larger and larger with each idiocy I've seen spouted in macros' favour,
and with each mutual or self-contradiction among the macros' defenders.
...If all you do with your macros is what you could
do with HOF's, it's silly to have macros in addition to HOF's
There is one c.l.l. denizen/guru who agrees with you. I believe his
...and there's another who has just answered in the EXACTLY opposite
way -- that OF COURSE macros can do more than HOF's. So, collectively
speaking, you guys don't even KNOW whether those macros you love so
much are really necessary to do other things than non-macro HOFs allow
(qualification inserted to try to divert the silly objection, already made
by others on your side, that macros _are_ functions), or just pretty things
up a little bit. Would y'all mind coming to some consensus among you
experienced users of macros BEFORE coming to spout your wisdom over
to us poor benighted non-lovers thereof, THANKYOUVERYMUCH...?
far from all). This is far from the first time I'm explaining this, btw.
Oh. OK, now that you mention it I have been skimming lately.
In this case, I think it was quite rude of you to claim I was not answering
questions, when you knew you were NOT READING what I wrote.


As you claim that macros are just for prettying things up, I will restate
(as you may not have read it) one of the many things I've said over and
over on this thread: I do not believe the minor advantage of prettying
things up is worth the complication, the facilitation of language
divergence between groups, and the deliberate introduction of multiple
equivalent ways to solve the same problem, which I guess you do know
I consider a bad thing, one that impacts productivity negatively.


Alex
Alex Martelli
2003-10-11 16:44:07 UTC
Permalink
prunesquallor at comcast.net wrote:
...
A single web browser?
I far prefer to have on my cellphone one that's specialized for its small
screen and puny cpu/memory, and on more powerful computers, more
powerful browsers. Don't you?
Cellphone? Why on earth would I want one of those?
Why would you want a cellphone, or why would you want a browser on it?
As for the former, many people I know first got their cellphones because
their parents were old and sick, and they wanted to be sure their parents
could immediately get in touch with them in case of need. Of course, not
knowing you at all, I can't tell whether you're an orphan, or have parents
who are young and healthy, or don't care a whit about their being able to
reach you -- whatever. Once you do have a cellphone for whatever reason,
why not get more use out of it? Checking weather reports for wherever
you're traveling to, etc, etc -- a browser's a decent way to do many such
things.
A single operating system?
On my cellphone all the way to the datacenter?
A cellphone with an OS?! The phone I use is implemented
with *wires*.
Forget cellphones then, and let's check if we DO want a single operating
system over a huge range of computers serving enormously different
tasks, as Microsoft is so keen to tell us, or not.

What do I want of the OS running my firewall? Security, security, security.
It's nice if it can run on very scant resources, offers solid and usable
packet filtering, and has good, secure device drivers available for the
kind of devices I may want in a computer dedicated to firewalling -- all
sorts of ethernet cards, wifi thingies, pppoe and plain old ppp on ISDN for
emergency fallback if cable service goes down, UPS boxes, a serial console
of course, perhaps various storage devices, and that's about it.

I see no reason why I should use anything but OpenBSD for that. Right now
I'm running it, for a LAN of half a dozen desktops and 2-5 laptops, on an
old scavenged Pentium-75, 32MB RAM, 1GB disk ISA machine, and it's got so
many resources to spare that I ended up running services aplenty on it too
(DHCP, DNS, ntp, Squid for proxying, ...).

What do I want of the OS running my desktop? Resources are not a real
problem, since throwing CPU cycles, RAM, and disk at it is so dirt-cheap.
Some security, just enough to avoid most of the pesky worms and viruses
going around. Lots and LOTS of apps, and lots and LOTS of drivers for all
sort of cool devices for video, audio, and the like. Linux is pretty good
there, though I understand Windows can be useful sometimes (drivers
aren't yet available for _every_thing under Linux) even though security is
awful there, and MacOS/X would be cool were it not for HW cost. For a small
server? Resources should not be eaten up by the OS but available for
serving the rest of the LAN -- lots of free server-side apps & proxies --
security important, device drivers so-so -- I can see either Linux,
OpenBSD, or FreeBSD being chosen there.

A LARGE server, were I to need one? A Linux cluster, or an IBM mainframe
able to run Linux at need on virtual machines, sound better then.

A laptop? A palmtop? Linux may cover some of those (I do enjoy it on my
Zaurus) but is really too demanding for the cheapest palmtops -- and I
still can't get good sleep (as opposed to hibernate) from it on laptops
with ACPI.

What about the several computers in my car? They play very specialized
roles and would get NO advantages from general-purpose OS's -- and on
the other hand, most of them REALLY need hard real-time OS's to do their
jobs. I think not even MS tries to push Windows into most of THOSE
computers -- it would be just too crazy even for them.

So, in practice, I'd never go with the same OS on ALL computers. What
is most needed on computers playing such widely different roles is just
too different: an OS trying to cover ALL bases would be so huge, complicated
and unwieldy that its security AND general bugginess would suck (please
notice that Windows is the only OS really trying, even though not really on
anywhere near the WHOLE gamut -- with Linux admittedly close behind;-).


Alex
David Mertz
2003-10-07 18:13:39 UTC
Permalink
|> def posneg(filter,iter):
|> results = ([],[])
|> for x in iter:
|> results[not filter(x)].append(x)
|> return results
|> collect_pos,collect_neg = posneg(some_property, some_file_name)

Pascal Costanza <costanza at web.de> wrote previously:
|What about dealing with an arbitrary number of filters?

Easy enough:

def categorize_exclusive(filters, iter):
    results = tuple([[] for _ in range(len(filters))])
    for x in iter:
        for n, filter in enumerate(filters):
            if filter(x):
                results[n].append(x)
                break
    return results

Or if you want to let things fall in multiple categories:

def categorize_inclusive(filters, iter):
    results = tuple([[] for _ in range(len(filters))])
    for x in iter:
        for n, filter in enumerate(filters):
            if filter(x):
                results[n].append(x)
    return results

Or if you want something to satisfy ALL the filters:

def categorize_compose(filters, iter):
    results = tuple([[] for _ in range(len(filters))])
    for x in iter:
        results[compose(filters)(x)].append(x)
    return results

The implementation of 'compose()' is left as an exercise to readers :-).
Or you can buy my book, and read the first chapter.
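One minimal way to do the exercise (a sketch of my own, not Mertz's answer from the book): treat composition of predicate filters as a logical AND over all of them.

```python
def compose(filters):
    """Return a predicate that is true only when every filter accepts x."""
    def composed(x):
        return all(f(x) for f in filters)
    return composed

positive = lambda x: x > 0
even = lambda x: x % 2 == 0
print(compose([positive, even])(4))   # → True
print(compose([positive, even])(-2))  # → False
```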

Yours, David...

--
Keeping medicines from the bloodstreams of the sick; food from the bellies
of the hungry; books from the hands of the uneducated; technology from the
underdeveloped; and putting advocates of freedom in prisons. Intellectual
property is to the 21st century what the slave trade was to the 16th.
--
Buy Text Processing in Python: http://tinyurl.com/jskh
Steve Williams
2003-10-12 03:37:09 UTC
Permalink
Alex Martelli wrote:
[snip]
but we DO want to provide very clear and precise error diagnostics, of course,
and the language/metalanguage issue is currently open). You will note
that this use of macros involves none of the issues I have expressed about
them (except for the difficulty of providing good error-diagnostics, but
that's of course solvable).
finally, a breath of fresh air.
David Eppstein
2003-10-06 17:46:13 UTC
Permalink
In article
<my-first-name.my-last-name-0610030955090001 at k-137-79-50-101.jpl.nasa.gov>,
: (with-collector collect
: (do-file-lines (l some-file-name)
: (if (some-property l) (collect l))))
: This returns a list of all the lines in a file that have some property.
OK, that's _definitely_ just a filter: filter someproperty somefilename
Perhaps throw in a fold if you are trying to abstract "collect".
The net effect is a filter, but again, you need to stop thinking about the
"what" and start thinking about the "how", otherwise, as I said, there's
no reason to use anything other than machine language.
Answer 1: literal translation into Python. The closest analogue of
with-collector etc would be Python's simple generators (yield keyword)
and do-with-file-lines is expressed in python with a for loop. So:

def lines_with_some_property(some_file_name):
    for l in open(some_file_name):
        if some_property(l):
            yield l

Your only use of macros in this example is to handle the with-collector
syntax, which is handled in a clean macro-free way by Python's "yield".
So this is unconvincing as a demonstration of why macros are a necessary
part of a good programming language.

Of course, with-collector could be embedded in a larger piece of code,
while using yield forces lines_with_some_property to be a separate
function, but modularity is good...
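A quick self-contained run of the generator idea (some_property here is a hypothetical predicate, and a list of lines stands in for the file):

```python
def some_property(line):
    """Hypothetical predicate: keep comment lines."""
    return line.startswith("#")

def lines_with_some_property(lines):
    for l in lines:
        if some_property(l):
            yield l

text = ["# one\n", "two\n", "# three\n"]
print(list(lines_with_some_property(text)))  # → ['# one\n', '# three\n']
```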

Answer 2: poetic translation into Python. If I think about "how" I want
to express this sort of filtering, I end up with something much less
like the imperative-style code above and much more like:

[l for l in some_file_name if some_property(l)]

I have no problem with the assertion that macros are an important part
of Lisp, but you seem to be arguing more generally that the lack of
macros makes other languages like Python inferior because the literal
translations of certain macro-based code are impossible or more
cumbersome. For the present example, even that argument fails, but more
generally you'll have to also convince me that even a freer poetic
translation doesn't work.
--
David Eppstein http://www.ics.uci.edu/~eppstein/
Univ. of California, Irvine, School of Information & Computer Science
Hans Nowak
2003-10-05 01:20:40 UTC
Permalink
|shocked at how awkward Paul Graham's "accumulator generator" snippet is
| self.n = n
| self.n += i
| return self.n
Me too. The way I'd do it is probably a lot closer to the way Schemers
would:
>>> def foo(i, accum=[0]):
...     accum[0] += i
...     return accum[0]
...
>>> foo(1)
1
>>> foo(3)
4
Shorter, and without an awkward class.
Yah, but instead it abuses a relatively obscure Python feature... the fact that
default arguments are created when the function is created (rather than when it
is called). I'd rather have the class, which is, IMHO, a better way to
preserve state than closures. (Explicit being better than implicit and all
that... :-)
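For comparison, the explicit class version Hans prefers might read like this (a sketch; the name Accumulator is mine, not from the thread):

```python
class Accumulator:
    """Explicit state: each instance keeps its own running total."""
    def __init__(self, n=0):
        self.n = n
    def __call__(self, i):
        self.n += i
        return self.n

acc = Accumulator()
print(acc(1))  # → 1
print(acc(3))  # → 4
```

Unlike the default-argument trick, two Accumulator instances keep independent totals.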
--
Hans (hans at zephyrfalcon.org)
http://zephyrfalcon.org/
Hartmann Schaffer
2003-10-14 16:38:35 UTC
Permalink
In article <raffaelcavallaro-583E78.11332714102003 at netnews.attbi.com>,
(flet ((add-offset (x) (+ x offset)))
(map 'list #'add-offset some-list))
But flet is just lambda in drag. I mean real, named functions, with
(add-offset the-list)
instead of either of the versions you gave.
the version he gave has the advantage that it doesn't clutter up the
namespace of the environment. with

(map (lambda (x) (+ x offset)) the-list)

you have everything that is relevant in the immediate neighborhood of
the statement. with a separate defun you have to search the program
to see what the function does. i agree that for lengthy functions
defining it separately and just writing the name is preferable.
...
I guess I'm arguing that the low level implementation details should not
be inlined by the programmer, but by the compiler. To my eye, anonymous
functions look like programmer inlining.
i would call this a misconception

hs
--
ceterum censeo SCO esse delendam
Robin Becker
2003-10-11 23:16:17 UTC
Permalink
In article <gGWhb.200390$hE5.6777507 at news1.tin.it>, Alex Martelli
<aleaxit at yahoo.com> writes
...
bombs waiting to go off since they have long standing prior meanings
not in any way associated with this type of operation. OTOH, if you
really wanted them, you could define them.
Is it a good thing that you can define "bombs waiting to go off"?
Python's reply "There should be one-- and preferably only one --
obvious way to do it."
This then is probably the best reason to _not_ use Python for anything
other than the trivial. It has long been known in problem solving
(not just computation) that multiple ways of attacking a problem, and
shifting among those ways, tends to yield the better solutions.
One, and preferably only one, of those ways should be the obvious one,
i.e., the best solution. There will always be others -- hopefully they'll
be clearly enough inferior to the best one, that you won't have to waste
too much time considering and rejecting them. But the obvious one
"may not be obvious at first unless you're Dutch".
The worst case for productivity is probably when two _perfectly
equivalent_ ways exist. Buridan's ass notoriously starved to death in
just such a worst-case situation; groups of programmers may not go
quite as far, but are sure to waste lots of time & energy deciding.
Alex
I'm not sure when this concern for the one true solution arose, but even
GvR provides an explicit example of multiple ways to do it in his essay
http://www.python.org/doc/essays/list2str.html

Even in Python there will always be tradeoffs between clarity and
efficiency.
--
Robin Becker
Steve VanDevender
2003-10-05 06:58:32 UTC
Permalink
Lisp (and possibly other languages I am not familiar with) adds the
alternative of *not* evaluating arguments but instead passing them as
unevaluated expressions. In other words, arguments may be
*implicitly* quoted. Since, unlike as in Python, there is no
alternate syntax to flag the alternate argument protocol, one must, as
far as I know, memorize/learn the behavior for each function. The
syntactic unification masks but does not lessen the semantic
divergence. For me, it made learning Lisp (as far as I have gotten)
more complicated, not less, especially before I 'got' what was going on.
What you're talking about are called "special forms" and are definitely
not functions, and are used when it is semantically necessary to leave
something in an argument position unevaluated (such as in 'cond' or
'if', Lisp 'defun' or 'setq', or Scheme 'define' or 'set!').
Programmers create them using the macro facilities of Lisp or Scheme
rather than as function definitions. There are only a handful of
special forms one needs to know in routine programming, and each one has
a clear justification for being a special form rather than a function.
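The point carries over to Python, which also evaluates arguments eagerly: an ordinary function cannot stand in for a special form like 'if', because both branch expressions are evaluated before the call (if_fn is a hypothetical name for illustration).

```python
def if_fn(cond, then_val, else_val):
    # An ordinary function: both branch values arrive already evaluated.
    return then_val if cond else else_val

print(if_fn(True, "yes", "no"))  # → yes

# if_fn(True, 1, 1/0) raises ZeroDivisionError even though cond is True,
# because 1/0 is evaluated before if_fn is entered -- which is exactly
# why 'if' must be a special form rather than a function.
```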

Lisp-family languages have traditionally held to the notion that Lisp
programs should be easily representable using the list data structure,
making it easy to manipulate programs as data. This is probably the
main reason Lisp-family languages have retained the very simple syntax
they have, as well as why there is no separate syntax for functions
and special forms.
Question: Python has the simplicity of one unified assignment
statement for the binding of names, attributes, slots and slices, and
multiples thereof. Some Lisps have the complexity of different
functions for different types of targets: set, setq, putprop, etc.
What about Scheme ;-?
Scheme has 'define', 'set!', and 'lambda' for identifier bindings (from
which 'let'/'let*'/'letrec' can be derived), and a number of mutation
operations for composite data types: 'set-car!'/'set-cdr!' for pairs,
'vector-set!' for mutating elements of vectors, 'string-set!' for
mutating strings, and probably a few others I'm forgetting.
--
Steve VanDevender "I ride the big iron" http://jcomm.uoregon.edu/~stevev
stevev at hexadecimal.uoregon.edu PGP keyprint 4AD7AF61F0B9DE87 522902969C0A7EE8
Little things break, circuitry burns / Time flies while my little world turns
Every day comes, every day goes / 100 years and nobody shows -- Happy Rhodes