Discussion:
Python syntax in Lisp and Scheme
(too old to reply)
Kenny Tilton
2003-10-03 13:52:07 UTC
Permalink
It'd be interesting to know where people got the idea of learning
Scheme/LISP from (apart from compulsory university courses)?
http://alu.cliki.net/The%20Road%20to%20Lisp%20Survey
That recently got repotted from another cliki and it's a little mangled,
but until after ILC2003 I am a little too swamped to clean it up.
Me and my big mouth. Now that I have advertised the survey far and wide,
and revisited it and seen up close the storm damage, sh*t, there goes
the morning. :) Well, I needed a break from RoboCells:

http://sourceforge.net/projects/robocells/

I am going to do what I can to fix up at least the road categorization,
and a quick glance revealed some great new entries, two that belong in
my Top Ten (with apologies to those getting bumped).

kenny
Bengt Richter
2003-10-08 22:12:31 UTC
Permalink
You know, I think that this thread has so far set a comp.lang.* record
for civility in the face of a massively cross-posted language
comparison thread. I was even wondering if it was going to die a quiet
death, too.
Ah well, we all knew it was too good to last. Have at it, lads!
Common Lisp is an ugly language that is impossible to understand with
crufty semantics
Scheme is only used by ivory-tower academics and is irrelevant to
real world programming
Python is a religion that worships at the feet of Guido van Rossum
combining the syntactic flaws of lisp with a bad case of feeping
creaturisms taken from languages more civilized than itself
There. Is everyone pissed off now?
No, that seems about right.
LOL ;-)

Regards,
Bengt Richter
Thomas F. Burdick
2003-10-08 06:24:35 UTC
Permalink
In article <xcvpth8rcfh.fsf at famine.ocf.berkeley.edu>,
I find the Lisp syntax hardly readable when everything looks alike,
mostly words and parentheses, and when every level of nesting requires
parens. I understand that it's easier to work with by macros, but it's
harder to work with by humans like me.
You find delimited words more difficult than symbols? For literate
people who use alphabet-based languages, I find this highly suspect.
Maybe readers of only ideogram languages might have different
preferences, but we are writing in English here...
well, there are a few occasions where symbols are preferable. Just
imagine mathematics with words only
Oh, certainly. Unlike most languages, Lisp lets you use symbols for
your own names (which is easily abused, but not very often). A bad
example:

;; Lets you swear in your source code, cartoonishly
(define-symbol-macro $%^&!
  (error "Aw, $%^&! Something went wrong..."))

;; An example use
(defun foo (...)
  (cond
    ...
    (t $%^&!)))

And, although you generally use symbols from the KEYWORD package for
keyword arguments, you don't have to, and they don't have to be words:

(defgeneric convert-object (object new-type)
  (:documentation "Like an extensible COERCE."))

(defun convert (object &key ((-> to)))
  "Sugary"
  (convert-object object to))

(defconstant -> '-> "More sugar")

;; Example usage
(convert *thing* -> (class-of *other-thing*))

Of course, these are lame examples, but they show that Lisp *can*
incorporate little ascii-picture-symbols. Good examples would
necessarily be very domain-dependent.
--
/|_ .-----------------------.
,' .\ / | No to Imperialist war |
,--' _,' | Wage class war! |
/ / `-----------------------'
( -. |
| ) |
(`-. '--.)
`. )----'
Alex Martelli
2003-10-12 17:06:52 UTC
Permalink
[quantum programming]
While an interesting topic, it's something I'm not going to worry about.
Me neither, for now.
And if I did, it would be in Python ;)
I suspect no existing language would be anywhere near adequate.
But if any current programming concept could stretch there, it might
be that of "unrealized until looked-into set of things", as in, Haskell's
"lazy" (nonstrict) lists. Now lists are sequential and thus quantumly
inappropriate, but perhaps it's a start.
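(An illustrative aside, not from the thread: the "unrealized until
looked-into set of things" Alex describes also maps loosely onto lazy
sequences in Python, where no element exists until something demands it.)

```python
import itertools

def naturals():
    """An 'unrealized' infinite sequence: no element exists
    until something looks into it."""
    n = 0
    while True:
        yield n
        n += 1

# Only the first five elements are ever realized:
first_five = list(itertools.islice(naturals(), 5))
```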
I bring it up as a counter-example to the idea that all modes of
programming have been and can be explored in a current Lisp.
I conjectured one interesting possibility -- that of handling ensembles
of possible solutions to a given problem.
I suspect we may have to map the 'ensembles' down to sets of
items, just as we generally map concurrency down to sets of
sequential actions, in order to be able to reason about them (though
I have no proof of that conjecture). IF you have to map more
complicated intrinsics down to sequential, deterministic, univocal
"things", I'm sure you could do worse than Lisp. As to whether
that makes more sense than dreaming up completely different
languages having (e.g.) nondeterminism or multivocity as more
intrinsic concepts, I pass: it depends mostly on what human beings
will find they need to use in order to reason most effectively in
this new realm -- and quite likely different humans will find they
have different needs in this matter.
In retrospect I should have given a more obvious possibility.
At some point I hope to have computer systems I can program
by voice in English, as in "House? Could you wake me up
at 7?" That is definitely a type of programming, but Lisp is
Yeah, well, I fear the answer will be yes (it could), but it won't
do so since you haven't _asked_ it to wake you up, only if it
could. ME, I definitely don't want to use natural language with
all of its ambiguity for anything except communicating with
other human beings, thankyouverymuch.
a language designed for text, not speed.
*blink* what does THAT doubtful assertion have to do with anything
else we were discussing just now...? I think lisp was designed for
lists (as opposed to, say, snobol, which WAS "designed for text") and
that they're a general enough data structure (and supplemented in
today's lisps with other good data structures) that they'll be quite good
for all kinds of 'normal' (deterministic &c) programming. As for speed,
I'm sure it's easier to get it out of lisp than out of python right now.
So what's your point, and its relation to the above...?
I believe it is an accepted fact that uniformity in GUI design is a good
thing because users don't need to learn arbitrarily different ways of
using different programs. You only need different ways of interaction
when a program actually requires it for its specific domain.
Yes, I agree this IS generally accepted (with, of course, some dissenters,
but in a minority).
My spreadsheet program looks different from my word processor
Sure, so do mine, but their menus are quite similar -- in as much as
it makes sense for them to have similar operations -- and ditto ditto
for their toolbars, keyboard shortcuts, etc etc. I.e. the differences
only come "when needed for a specific domain" just as Pascal just
said. So I don't know what you're intending with this answer.
is more in common. Still, the phrase "practicality beats purity"
seems appropriate here.
Uniformity is more practical than diversity: e.g. ctrl-c as the Copy
operation everywhere means my fingers, even more than my brain, get
used to it. If you assign ctrl-c to some totally different operation in
your gui app "because you think it's more practical" you're gonna
drive me crazy, assuming I have to use your app. (That already
happens to me with the -- gnome based i think -- programs using
ctrl-z for minimize instead of undo -- I'm starting to have frayed
nerves about that even for GVIM, one of the programs I use most
often...:-).
I firmly believe people can in general easily handle much more
complicated syntax than Lisp has. There's plenty of room to
spare in people's heads for this subject.
Sure, but is it worth it?
Do you have any doubt to my answer? :)
Given the difficulty I'm having understanding your stance[s] in
this post, I do. My own answer would be that syntax sugar is
in people's head anyway because of different contexts -- infix
arithmetic taught since primary school, indentation in outlines
and pseudocode, etc etc -- so, following that already-ingrained
set of conventions takes no "room to spare in people's heads" --
indeed, the contrary -- it saves them effort. If people's head
started as "tabula rasa" it might be different, but they don't, so
that's a moot issue.

That much being said, I _do_ like prefix syntax. In some cases
I need to sum a+b+c+d and repeating that silly plus rather than
writing (+ a b c d) grates. Or I need to check a<b<c<d and
again I wish I could more summarily write (< a b c d). When I
designed my own language for bridge-hands evaluation, BBL, I
used prefix notation, though in the form operator ( operands )
[which I thought would have been easier for other bridge players
to use], e.g.:

& ( # weak NT opener requires AND of two things:
    s ( 1 g 4 3 3 3    # shape 4333 (any), or
        2 g 4 4 3 2    # 4432 (any), or
        3 3- 3- 3- 5   # 5332 with 5 clubs, or
        4 3- 3- 5 3-   # 5332 with 5 diamonds
      )
    < ( 12             # as well as, 13-15 range for
        \+ SHDC c( 4 3 2 1 0)  # normal Milton-Work pointcount
        16
      )
  )
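(For comparison, a sketch of my own, not in the original post: Python
gets close to those two variadic prefix forms, with `sum` playing the
n-ary `+` and chained comparisons giving the n-ary `<`.)

```python
a, b, c, d = 1, 2, 3, 4

# n-ary addition without repeating the operator, like (+ a b c d)
total = sum((a, b, c, d))

# chained comparison, like (< a b c d)
ascending = a < b < c < d
```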

Maybe readers are starting to understand why I don't WANT to
use a language I design myself;-). Anyway, the language was
NOT enthusiastically taken up, until I wrote code generators with
a GUI accepting conditions in more "ordinary looking" notations
and building this, ahem, intrinsically "better" one;-) -- then, but only
then, did other players start using this to customize hand generators
and the like. (Yes, I did have macros, but puny enough that they
still required operator(operands) syntax -- they basically served only
to reduce duplication, or provide some little abstraction, not to
drastically change the language syntax at all). Ah well -- maybe I
should just put the BBL (Turbo Pascal) implementation and (Italian-
language) reference manual online -- it still moves nostalgia in me!-)
Convenience is what matters. If you are able to conveniently express
solutions for hard problems, then you win. In the long run, it doesn't
My APL experience tells me this is false: conveniently expressing
solutions is HALF the problem -- you (and others!) have to be
able to read them back and maintain and alter them later too.
matter much how things behave in the background, only at first.
Personally, I would love to write equations on a screen like I
would on paper, with integral signs, radicals, powers, etc. and
not have to change my notation to meet the limitations of computer
input systems.
So jot your equations on a tablet-screen and look for a good
enriched text recognition system. What's programming gotta
do with it?
For Lisp is a language tuned to keyboard input and not the full
range of human expression. (As with speech.)
Python even more so on the output side -- try getting a screen-reader to
do a halfway decent job with it. But what does this matter here?
(I know, there are people who can write equations in TeX as
fast as they can on paper. But I'm talking about lazy ol' me
who wants the convenience.)
Or, will there ever be a computer/robot combination I can
teach to dance? Will I do so in Lisp?
You may want to teach by showing and having the computer
infer more general rules from example. Whether the inference
engine will be best built in lisp, prolog, ocaml, mozart, whatever,
I dunno. I don't think it will be optimally done in Python, though.
"Horses for courses" is my philosophy in programming.
It seems to me that in Python, just as in most other languages, you
always have to be aware that you are dealing with classes and objects.
Given the "everything is an object" (classes included) and every object
belongs to a class, you could indeed say that -- in much the same sense
as you may be said to always be aware that you're breathing air in
everyday life. Such awareness is typically very subconscious, of course.
Why should one care? Why does the language force me to see that when it
really doesn't contribute to the solution?
I'm not sure in what sense "python forces you to see" that, e.g.,
the number 2 is an object -- or how can that fail to "contribute to
the solution". Care to exemplify?
Hmmm.. Is the number '1' an object? Is a function an object?
What about a module? A list? A class?
Yes to all of the above, in Python. I don't get your point.
print sum(range(100))
4950
Where in that example are you aware that you are dealing with classes
and objects?
Me? At every step -- I know 'sum' names a builtin object that is a
function (belonging to the class of builtin functions) taking one argument
which is a sequence, 'range' names another builtin object returning
a list object, etc. I'm not directly dealing with any of their classes --
I know they belong to classes, like any object does, but I have no need
to think about them in this specific statement (in fact, I hardly ever do;
signature-based polymorphism is what I usually care about, not class
membership, far more often than not).
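(A tiny sketch of that signature-based, duck-typed polymorphism as I
read Alex's point; illustrative code, not from the post.)

```python
def total(seq):
    # Cares only that seq can be iterated over and that its items
    # support +; it never inspects what class seq belongs to.
    result = 0
    for item in seq:
        result += item
    return result

# A list, a tuple, and a generator are all acceptable:
results = [total([1, 2, 3]), total((1, 2, 3)), total(x for x in range(1, 4))]
```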

But I don't get your points -- neither Andrew's nor Pascal's. How does
this differ from the awareness I might have in some macro-enhanced
lisp where I would type (print (sum (range 100))) or the like?
conjecture is that additional syntax can make some things easier.
That a problem can be solved without new syntax does not
contradict my conjecture.
But even if we provisionally concede your conjecture we are still
left wondering: is the degree of easing so high that it overcomes
the inevitable increase in complication, needed for a language to
have N+1 syntax forms where previously it only had N? I.e., it's
in general a difficult engineering tradeoff, like many in language
design -- which is why I'd rather delegate the decisions on these
tradeoffs to individuals, groups and processes with a proven track
record for making a lot of them with overall results that I find
delightful, rather than disperse them to myself & many others
(creating lots of not-quite-congruent language dialects).


Alex
David Mertz
2003-10-16 04:14:58 UTC
Permalink
|> Here's a quick rule that is pretty damn close to categorically true for
|> Python programming: If you use more than five levels of indent, you are
|> coding badly. Something is in desperate need of refactoring.

Pascal Bourguignon <spam at thalassa.informatimago.com> wrote previously:
|Here is an histogram of the depths of the top level sexps found in my
|emacs sources:
|((1 . 325) (2 . 329) (3 . 231) (4 . 163) (5 . 138) (6 . 158) (7 . 102)
| (8 . 94) (9 . 63) (10 . 40) (11 . 16) (12 . 20) (13 . 9) (14 . 4)
| (15 . 5) (16 . 4) (17 . 2) (19 . 2) (23 . 1))
|Am I depraved in writing code with depths down to 23?

As I've written lots of times in these threads, I haven't really used
Lisp. In fact, I really only did my first programming in Scheme (for an
article on SSAX) in the last couple weeks; I know Scheme isn't Common
Lisp, no need to point that out again. However, I -have- read a fair
number of snippets of Lisp code over the years, so my familiarity runs
slightly deeper than the couple weeks.

All that said, my gut feeling is that depth 23 is, indeed, ill-designed.
Even the more common occurrences of 12 or 13 levels seem like a lot
more than my brain can reason about. I'm happy to stipulate that
Bourguignon is smarter than I am... but I'm still confident that I can
do this sort of thinking better than 95% of the people who might have to
READ his code.

And the purpose of code, after all, is to COMMUNICATE ideas: firstly to
other people, and only secondly to machines.

|Ok, in Python, you may have also expressions that will further deepen
|the tree, but how can you justify an artificial segregation between
|indentation and sub-expressions?

Because it's Python! There is a fundamental syntactic distinction
between statements and expressions, and statements introduce blocks
(inside suites, bare expressions can occur though--usually functions
called for their side effects). It is the belief of the BDFL--and of
the large majority of programmers who use Python--that a syntactic
distinction between blocks with relative indentation and expressions
that nest using parens and commas AIDS READABILITY.

I can say experientially, and from reading and talking to other
Pythonistas, that my brain does a little flip when it finishes
identifying a suite, then starts identifying the parts of an expression.
And conveniently, in Python, the sort of thinking I need to do when I
read or write the lines of a suite is a bit different than for the parts
of an expression. Not just because I am deceived by the syntax, but
because the language really does arrange for a different sort of thing
to go on in statements versus expressions (obviously, there are SOME
overlaps and equivalences; but there's still a useful pattern to the
distinction).

Still, for a real comparison of depth, I suppose I'd need to look at the
combined depth of indent+paren-nesting. Even for that, well-designed
Python programs top out at 7 or 8, IMO. Maybe something a little deeper
reasonably occurs occasionally, but the histogram would sure look
different from Pascal's ("Flat is better than nested").
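(That "combined depth of indent+paren-nesting" could be measured
mechanically; a rough sketch of such a counter -- my own helper,
assuming 4-space indents, not anything from David's post.)

```python
def combined_depth(line, indent_width=4):
    """Estimate the depth of one Python line: indentation levels
    plus the deepest bracket nesting on the line."""
    stripped = line.lstrip(' ')
    indent_levels = (len(line) - len(stripped)) // indent_width
    depth = max_depth = 0
    for ch in stripped:
        if ch in '([{':
            depth += 1
            max_depth = max(max_depth, depth)
        elif ch in ')]}':
            depth -= 1
    return indent_levels + max_depth

# Two indent levels plus two nested calls give depth 4:
example = combined_depth("        f(g(x))")
```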

Yours, David...

--
---[ to our friends at TLAs (spread the word) ]--------------------------
Iran nuclear neocon POTUS patriot Pakistan weaponized uranium invasion UN
smallpox Gitmo Castro Tikrit armed revolution Carnivore al-Qaeda sarin
---[ Gnosis Software ("We know stuff") <mertz at gnosis.cx> ]---------------
Pascal Costanza
2003-10-09 13:59:24 UTC
Permalink
you can use macros to do everything one could use HOFs for (if you
really want).
I should have added: As long as it should execute at compile time, of
course.
Really? What about arbitrary recursion?
I don't see the problem. Maybe you have an example? I am sure the
Lisp'ers here can come up with a macro solution for it.
I'm not terribly familiar with the details of Lisp macros but since
recursion can easily lead to non-termination you certainly need tight
restrictions on recursion among macros in order to ensure termination of
macro substitution, don't you? Or at least some ad-hoc depth limitation.
The Lisp mindset is not to solve problems that you don't have.

If your code has a bug then you need to debug it. Lisp development
environments provide excellent debugging capabilities out of the box.
Don't guess how hard it is when you don't have the experience yet.


Pascal
--
Pascal Costanza University of Bonn
mailto:costanza at web.de Institute of Computer Science III
http://www.pascalcostanza.de Römerstr. 164, D-53117 Bonn (Germany)
David Eppstein
2003-10-22 03:33:23 UTC
Permalink
In article <1jclovopokeogrdajo6dfmhm090cdllfki at 4ax.com>,
It's certainly true that mathematicians do not _write_
proofs in formal languages. But all the proofs that I'm
aware of _could_ be formalized quite easily. Are you
aware of any counterexamples to this? Things that
mathematicians accept as correct proofs which are
not clearly formalizable in, say, ZFC?
I am not claiming that it is a counterexample, but I've always met
with some difficulties imagining how the usual proof of Euler's
theorem about the number of corners, sides and faces of a polyhedron
(correct terminology, BTW?) could be formalized. Also, however that
could be done, I have an uneasy feeling about how complex it
would be compared to the conceptual simplicity of the proof itself.
Which one do you think is the usual proof?
http://www.ics.uci.edu/~eppstein/junkyard/euler/

Anyway, this exact example was the basis for a whole book about what is
involved in going from informal proof idea to formal proof:
http://www.ics.uci.edu/~eppstein/junkyard/euler/refs.html#Lak
--
David Eppstein http://www.ics.uci.edu/~eppstein/
Univ. of California, Irvine, School of Information & Computer Science
Alex Martelli
2003-10-04 19:48:54 UTC
Permalink
Bengt Richter wrote:
...
I like the Bunch class, but the name suggests vegetables to me ;-)
Well, I _like_ vegetables...
BTW, care to comment on a couple of close variants of Bunch with
per-object class dicts? ...
def mkNSC(**kwds): return type('NSC', (), kwds)()
Very nice (apart from the yecchy name;-).
or, stretching the one line a bit to use the instance dict,
def mkNSO(**kwds):
    o = type('NSO', (), {})(); o.__dict__.update(kwds); return o
I don't see the advantage of explicitly using an empty dict and then
updating it with kwds, vs using kwds directly.
I'm wondering how much space is actually wasted with a throwaway class. Is
there a lazy copy-on-write kind of optimization for class and instance
dicts that prevents useless proliferation? I.e.,
I strongly doubt there's any "lazy copy-on-write" anywhere in Python.
The "throwaway class" will be its dict (which, here, you need -- that's
the NS you're wrapping, after all) plus a little bit (several dozen bytes
for the typeobject, I'd imagine); an instance of Bunch, probably a bit
smaller. But if you're going to throw either away soon, who cares?
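(For readers who missed the earlier posts, the Bunch idiom under
discussion is tiny; a sketch of both variants, the class version and
the throwaway-class `mkNSC`.)

```python
class Bunch:
    """Ad-hoc namespace: turns keyword arguments into attributes."""
    def __init__(self, **kwds):
        self.__dict__.update(kwds)

def mkNSC(**kwds):
    # Throwaway class: the bindings live in the class dict.
    return type('NSC', (), kwds)()

p = Bunch(x=1, y=2)
q = mkNSC(x=1, y=2)
```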
but I think the "purer" (more extreme) versions are
interesting "tipizations" for the languages, anyway.
Oh goody, a new word (for me ;-). Would you define "tipization"?
I thought I was making up a word, and slipped by spelling it
as in Italiano "tipo" rather than English "type". It appears
(from Google) that "typization" IS an existing word (sometimes
mis-spelled as "tipization"), roughly in the meaning I intended
("characterization of types") -- though such a high proportion
of the research papers, institutes, etc, using "typization",
seems to come from Slavic or Baltic countries, that I _am_
left wondering...;-).


Alex
Alan Crowe
2003-10-04 11:31:45 UTC
Permalink
If a set of macros could be written to improve LISP
syntax, then I think that might be an amazing thing. An
interesting question to me is why hasn't this already been
done.
I think the issue is the grandeur of the Lisp vision. More
ambitious projects require larger code bases. Ambition is
hard to quantify. Nevertheless one must face the issue of
scaling. Does code size go as the cube of ambition, or is it
the fourth power of ambition? Or something else entirely.

Lisp aspires to change the exponent, not the constant
factor. The constant factor is very important. That is why
CL has FILL :-) but shrinking the constant factor has been
done (and with C++ undone).

Macros can be used to abbreviate code. One can spot that one
is typing similar code over and over again. One says
"whoops, I'm typing macro expansions". Do you use macros to
tune the syntax, so that you type N/2 characters instead of
N characters, or do you capture the whole concept in macro
and eliminate the repetition altogether?

The point is that there is nowhere very interesting to go
with syntax tuning. It is the bolder goal of changing the
exponent, and thus seriously enlarging the realm of the
possible, that excites.

Alan Crowe
Ingvar Mattsson
2003-10-09 11:16:09 UTC
Permalink
method overloading,
Joe> Now I'm *really* confused. I thought method overloading involved
Joe> having a method do something different depending on the type of
Joe> arguments presented to it. CLOS certainly does that.
He probably means "operator overloading" -- in languages where
there is a difference between built-in operators and functions,
their OOP features let them put methods on things like "+".
Lisp doesn't let you do that, because it turns out to be a bad idea.
When you go reading someone's program, what you really want is for
the standard operators to be doing the standard and completely
understood thing.
Though if one *really* wants to have +, -, * and / as generic
functions, I imagine one can use something along the lines of:

(defpackage "GENERIC-ARITHMETIC"
  (:shadow "+" "-" "/" "*")
  (:use "COMMON-LISP"))

(in-package "GENERIC-ARITHMETIC")
(defgeneric arithmetic-identity (op arg))

(defmacro defarithmetic (op)
  (let ((two-arg
         (intern (concatenate 'string "TWO-ARG-" (symbol-name op))
                 "GENERIC-ARITHMETIC"))
        (cl-op (find-symbol (symbol-name op) "COMMON-LISP")))
    `(progn
       (defun ,op (&rest args)
         (cond ((null args) (arithmetic-identity ',op nil))
               ((null (cdr args))
                (,two-arg (arithmetic-identity ',op (car args))
                          (car args)))
               (t (reduce (function ,two-arg)
                          (cdr args)
                          :initial-value (car args)))))
       (defgeneric ,two-arg (arg1 arg2))
       (defmethod ,two-arg ((arg1 number) (arg2 number))
         (,cl-op arg1 arg2)))))

Now, I have (because I am lazy) left out definitions of the generic
function ARITHMETIC-IDENTITY (general idea, when fed an operator and
NIL, it returns the most generic identity, when fed an operator and an
argument, it can return a value that is more suitable) and there's
probably errors in the code, too.

But, in principle, that should be enough of a framework to build from,
I think.
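(For contrast with the shadowing approach above, a sketch of mine, not
part of Ingvar's post: in Python the built-in operators dispatch through
methods like `__add__`, so user-defined types can join `+` directly.)

```python
class Money:
    """A user-defined type that participates in the built-in + operator."""
    def __init__(self, cents):
        self.cents = cents
    def __add__(self, other):
        return Money(self.cents + other.cents)

total = Money(150) + Money(250)
```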

//Ingvar
--
My posts are fair game for anybody who wants to distribute the countless
pearls of wisdom sprinkled in them, as long as I'm attributed.
-- Martin Wisse, in a.f.p
Edi Weitz
2003-10-16 21:08:43 UTC
Permalink
For simple use of built-in libraries,
http://online.effbot.org/2003_08_01_archive.htm#troll
looks like a good test case.
Quick hack follows.

edi at bird:/tmp > cat troll.lisp
(asdf:oos 'asdf:load-op :aserve)
(asdf:oos 'asdf:load-op :cl-ppcre)

(defparameter *scanner*
  (cl-ppcre:create-scanner
   "<a href=\"AuthorThreads.asp[^\"]*\">([^<]+)</a></td>\\s*
<td align=\"center\">[^<]+</td>\\s*
<td align=\"center\">[^<]+</td>\\s*
<td align=\"center\">\\d+</td>\\s*
<td align=\"center\">(\\d+)</td>\\s*
<td align=\"center\">(\\d+)</td>\\s*
<td align=\"center\">\\d+</td>\\s*
<td align=\"center\">(\\d+)</td>\\s*"))

(defun troll-checker (name)
  (let ((target
         (net.aserve.client:do-http-request
          (format nil "http://netscan.research.microsoft.com/Static/author/authorprofile.asp?searchfor=~A" name)
          :protocol :http/1.0)))
    (cl-ppcre:do-scans (match-start match-end reg-starts reg-ends *scanner* target)
      (flet ((nth-group (n)
               (subseq target (aref reg-starts n) (aref reg-ends n))))
        (let* ((group (nth-group 0))
               (posts (parse-integer (nth-group 1)))
               (replies (parse-integer (nth-group 2)))
               (threads-touched (parse-integer (nth-group 3)))
               (reply-to-post-ratio (/ replies posts))
               (threads-to-post-ratio (/ threads-touched posts)))
          (unless (< posts 10)
            (format t "~55A R~,2F T~,2F ~:[~;TROLL~:[?~;!~]~]~%"
                    (subseq group 0 (min 55 (length group)))
                    reply-to-post-ratio
                    threads-to-post-ratio
                    (and (> reply-to-post-ratio .8)
                         (< threads-to-post-ratio .4))
                    (< threads-to-post-ratio .2))))))))

(compile 'troll-checker)

edi at bird:/tmp > cmucl
; Loading #p"/home/edi/.cmucl-init".
CMU Common Lisp 18e, running on bird.agharta.de
With core: /usr/local/lib/cmucl/lib/lisp.core
Dumped on: Thu, 2003-04-03 15:47:12+02:00 on orion
Send questions and bug reports to your local CMUCL maintainer,
or see <http://www.cons.org/cmucl/support.html>.
Loaded subsystems:
Python 1.1, target Intel x86
CLOS 18e (based on PCL September 16 92 PCL (f))
* (load "troll")

; loading system definition from /usr/local/lisp/Registry/aserve.asd into
; #<The ASDF1017 package, 0/9 internal, 0/9 external>
; registering #<SYSTEM ASERVE {4854AEF5}> as ASERVE
; loading system definition from /usr/local/lisp/Registry/acl-compat.asd into
; #<The ASDF1059 package, 0/9 internal, 0/9 external>
; registering #<SYSTEM ACL-COMPAT {4869AD35}> as ACL-COMPAT
; loading system definition from /usr/local/lisp/Registry/htmlgen.asd into
; #<The ASDF1145 package, 0/9 internal, 0/9 external>
; registering #<SYSTEM HTMLGEN {487E64C5}> as HTMLGEN
; loading system definition from /usr/local/lisp/Registry/cl-ppcre.asd into
; #<The ASDF1813 package, 0/9 internal, 0/9 external>
; registering #<SYSTEM #:CL-PPCRE {48F32835}> as CL-PPCRE
; Compiling LAMBDA (NAME):
; Compiling Top-Level Form:
T
* (troll-checker "edi at agharta.de")
comp.lang.lisp R0.93 T0.63
NIL
* (troll-checker "eppstein at ics.uci.edu")
rec.photo.digital R1.00 T0.76
rec.arts.sf.written R0.99 T0.57
comp.lang.python R0.98 T0.64
rec.photo.equipment.35mm R1.00 T0.73
sci.math R1.00 T0.77
rec.puzzles R1.00 T0.75
comp.theory R1.00 T0.56
comp.graphics.algorithms R1.00 T0.87
comp.sys.mac.apps R1.00 T0.69
NIL
* (troll-checker "spam at thalassa.informatimago.com")
comp.lang.lisp R0.91 T0.44
fr.comp.os.unix R1.00 T0.70
es.comp.os.linux.programacion R1.00 T0.67
fr.comp.lang.lisp R1.00 T0.40 TROLL?
comp.unix.programmer R1.00 T0.92
sci.space.moderated R1.00 T0.43
gnu.emacs.help R0.95 T0.84
sci.space.policy R1.00 T0.33 TROLL?
alt.folklore.computers R1.00 T0.43
comp.lang.scheme R0.83 T0.58
fr.comp.os.mac-os.x R0.92 T0.83
NIL

Granted, Portable AllegroServe[1] and CL-PPCRE[2] aren't "built-in"
(but freely available, compatible with various CL compilers, and easy
to install) and Python might have a bit more syntactic sugar but it
wasn't _too_ hard to do that in Lisp.

Edi

[1] <http://portableaserve.sf.net/>
[2] <http://weitz.de/cl-ppcre/>
Dave Benjamin
2003-10-09 03:01:02 UTC
Permalink
For instance, I always thought this was a cooler alternative to the
try/finally block to ensure that a file gets closed (I'll try not to
open('input.txt', { |f|
    do_something_with(f)
    do_something_else_with(f)
})

f = open('input.txt')
do_something_with(f)
do_something_else_with(f)
f.close()
"Explicit is better than implicit"
In that case, why do we eschew code blocks, yet have no problem with the
implicit invocation of an iterator, as in:

for line in file('input.txt'):
    do_something_with(line)

This is not to say that I dislike that behavior; in fact, I find it
*beneficial* that the manner of looping is *implicit* because you can
substitute a generator for a sequence without changing the usage. But
there's little readability difference, IMHO, between that and:

file('input.txt').each_line({ |line|
    do_something_with(line)
})

Plus, the first example is only obvious because I called my iteration
variable "line", and because this behavior is already widely known. What
if I wrote:

for byte in file('input.dat'):
    do_something_with(byte)

That would be a bit misleading, no? But the mistake isn't obvious. OTOH,
in the more explicit (in this case) Ruby language, it would look silly:

open('input.txt').each_line { |byte|
    # huh? why a byte? we said each_line!
}

I think this is important to point out, because the implicit/explicit
rule comes up all the time, yet Python is implicit about lots of things!
To name a few:

- for loops and iterators
- types of variables
- dispatching via subclass polymorphism
- coercion (int->float, int->long...)
- exceptions (in contrast with Java's checked exceptions)
- __magic_methods__
- metaclasses
- nested scopes (compared to yesteryear's lambda x, y=y, z=z: ...)
- list comprehensions

In all of the above cases (with a bit of hesitation toward the voodoo of
metaclasses) I think Python is a better language for it. On the other
hand, Perl's implicit $_ variable is a good example of the hazards of
implicitness; that can be downright confusing. So, it's not cut and
dried by any means.
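(The substitutability point about generators can be shown concretely; a
sketch of my own: the consuming code is identical whether it is handed a
list or a generator.)

```python
def lines_list():
    return ["a\n", "b\n"]

def lines_gen():
    yield "a\n"
    yield "b\n"

def stripped(lines):
    # The manner of looping stays implicit in the for statement,
    # so either source works unchanged.
    return [line.strip() for line in lines]

same = stripped(lines_list()) == stripped(lines_gen())
```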

If all you're saying is that naming something is better than not naming
something because explicit is better than implicit, I'd have to ask why:

a = 5
b = 6
c = 7
d = a + b
e = c / 2
result = d + e
return result

Is any better than:

...
return (a + b) + (c / 2)

To me, it's the same issue. Why should I have to name something that I'm
just going to return in the next statement, or pass as a parameter, and
then be done with it? Does that really increase either readability or
understandability? Why should I name something that I'm not going to ask
for later?
Even your example clearly shows that the try block is much more readable
and understandable.
That's why it's considered evil by the majority of Python developers.
Readability is a moving target. I think that the code block syntax
strikes a nice balance between readability and expressiveness. As far as
what the majority of Python developers consider evil, I don't think
we've got the stats back on that one.
But the anonymous version still looks more concise to me.
Python prioritizes things differently than other languages.
It's not APL. "Readability counts"
This is nothing like APL... if anything, it's like Smalltalk, a language
designed to be readable by children! I realize that APL sacrificed
readability for expressiveness to an uncomfortable extreme, but I really
think you're comparing apples and oranges here. List comprehensions are
closer to APL than code blocks.

Dave
Raymond Wiker
2003-10-06 11:09:00 UTC
Permalink
1.) Inventing new control structures (implement lazy data structures,
implement declarative control structures, etc.)
=> This one is rarely needed in everyday application programming and
can easily be misused.
This is, IMHO, wrong. One particular example is creating
macros (or read macros) for giving values to application-specific data
structures.
You have to know if you want a sharp knife (which may hurt you when
misused) or a blunter one (which takes more effort to cut
with).
It is easier to hurt yourself with a blunt knife than a sharp
one.
--
Raymond Wiker Mail: Raymond.Wiker at fast.no
Senior Software Engineer Web: http://www.fast.no/
Fast Search & Transfer ASA Phone: +47 23 01 11 60
P.O. Box 1677 Vika Fax: +47 35 54 87 99
NO-0120 Oslo, NORWAY Mob: +47 48 01 11 60

Try FAST Search: http://alltheweb.com/
Corey Coughlin
2003-10-11 00:00:48 UTC
Permalink
You are mostly correct about Japanese, I took a year of it in college
and it is a fairly standard SOV language. (Like Latin, oddly enough.)
And I'm sure you're right about RPN vs. PN, I always get those
confused. Which is kind of the point, really, having studied math
since I was a kid I got used to stuff like "y = mx + b", can you
blame me if I have an easier time with "y = m*x + b" as opposed to
"(let y (+ (* m x) b))"? (Forgive me if the parentheses on that are
off, the newsreader editor doesn't match them, and maybe I need a
'setq' instead of a 'let' or some other thing, I'm not entirely sure.)
(And again, is the point getting more clear?) And thanks for backing
me up on car's and cdr's, I never could understand why a language
ostensibly designed for 'list processing' has such a bizarre way to
reference items in a list. But is (nth 10 mylist) really as easy as
mylist[10]? My intuition says no, not really.
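For what it's worth, the two spellings also reflect different structures underneath. A hedged sketch in Python, modeling cons pairs as 2-tuples (names hypothetical):

```python
def nth(n, lst):
    """nth on a Lisp-style linked list: walk n 'rest' links from the head."""
    for _ in range(n):
        lst = lst[1]        # follow the rest pointer
    return lst[0]

# build the linked list (0 1 2 ... 10) from nested pairs
linked = None
for i in range(10, -1, -1):
    linked = (i, linked)

mylist = list(range(11))
# nth(10, linked) walks ten links; mylist[10] indexes directly
```

So (nth 10 mylist) and mylist[10] name the same operation, but on a linked list the former traverses ten cells to get there.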

Sure, I can appreciate looking at things in different ways, and it is
nice to learn new things and see how they apply. But if John Grisham
learns Japanese, does that mean he should write all his books in
Japanese? Or should he stick to English? I suppose if I were a real
CS guy (I'm actually an electrical engineer, the scheme course was one
of the two CS courses I took in college, so I'm mostly self taught) or
if I worked within a big group of Lisp programmers, I would probably
feel more comfortable with it. Since I now mostly work as an isolated
programmer for other engineers, and the last language I was using for
everything was C, Python is a huge improvement, and it doesn't give me
too much of a headache. Sure, it's not perfect. But there's no way
I'm going to adopt Lisp as a perfect language anytime soon. That is,
if I want to keep hitting my deadlines and getting paid. And sure, I
may get comfortable and miss out on cool stuff, but on the upside,
I'll be comfortable.

Oh, and if I'm writing in this thread, I suppose I should comment on
how bad lisp macros are. Except I know nothing about them. But it
seems like most languages have dark corners like that, where you can
do thing above and beyond your standard programming practices. Python
has metaclasses, which give me a headache most of the time, so I don't
really use them at all. But I seem to get plenty of stuff done
without using them, so it works for me. If you really have to use
macros in Lisp to get things done, that sounds kind of troublesome,
but it would be consistent: it always seemed like really working well
in Lisp requires you to know how everything works all at once,
which always struck me as kind of a downside. But as I said,
I'm not the big CS guru, so Lisp just may not be for me in general.
Ah well, I suppose I'll get by with Python. :D
(Not to mention car, cdr, cadr, and
so on vs. index notation, sheesh.)
Yes, that is a real regret. It would have been useful to support
a kind of (nth 10 mylist) straight from the Scheme standard library.
Using parentheses and rpn everywhere makes lisp very easy
to parse, but I'd rather have something easy for me to understand.
That's why I prefer python, you
get a nice algebraic syntax with infix and equal signs, and it's easy to
understand.
Python is
intuitive to me out of the box, and it just keeps getting better, so I
think I'll stick with it.
First, a minor correction: Lisp/Scheme is like (* 1 2) and that is
Polish Notation or prefix; Reverse Polish Notation or postfix would be
like (1 2 *).
From what I heard about the Japanese language I have formed the
possibly oversimplified impression that it is largely postfix.
Whereas in English we say "I beat you", they may say something like "I
you beat". So I suppose all of the existing programming notations -
Lisp's and Cobol's (* 1 2) and MULTIPLY 1 BY 2, Fortran's "intuitive"
1+2, and OO's one.add(two) - are very counterintuitive to them, and
they would really like the way of HP calculators, no?
And I suppose the ancient Romans (and even the modern Vaticans) would
laugh at this entire dilemma (or trilemma?) between ___fixes.
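The prefix/postfix distinction above can be made concrete with a toy postfix (RPN) evaluator. This is an illustrative sketch, not any particular calculator's semantics:

```python
import operator

def eval_rpn(tokens):
    """Evaluate a postfix (RPN) token list such as ['1', '2', '*'].
    Prefix (Polish) notation would be the mirror image: ['*', '1', '2']."""
    ops = {'+': operator.add, '-': operator.sub,
           '*': operator.mul, '/': operator.truediv}
    stack = []
    for tok in tokens:
        if tok in ops:
            b = stack.pop()   # operands arrive before the operator
            a = stack.pop()
            stack.append(ops[tok](a, b))
        else:
            stack.append(float(tok))
    return stack.pop()
```

Note how the postfix form needs no parentheses at all: the operand order alone drives the stack, which is why HP calculators used it.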
Intuition is acquired. It is purely a product of education or
brainwashing. There is nothing natural about it. And since it is
acquired, you may as well keep acquiring new intuitions and see more
horizons, rather than keep reinforcing old intuitions and stagnate.
Appreciating a foreign language such as Japanese some day is not a bad
idea.
Raffael Cavallaro
2003-10-12 19:54:59 UTC
Permalink
Lispniks are driven by the assumption that there is always the
unexpected. No matter what happens, it's a safe bet that you can make
Lisp behave the way you want it to behave, even in the unlikely event
that something happens that no language designer has ever thought of
before. And even if you cannot find a perfect solution in some cases,
you will at least be able to find a good approximation for hard
problems.
This I believe is the very crux of the matter. The problem domain to
which lisp has historically been applied, artificial intelligence,
more or less guaranteed that lisp hackers would run up against the
sorts of problems that no one had ever seen before. The language
therefore evolved into a "programmable programming language," to quote
John Foderaro (or whoever first said or wrote this now famous line).

Lisp gives the programmer who knows he will be working in a domain
that is not completely cut and dried, the assurance that his language
will not prevent him from doing something that has never been done
before. Python gives me the distinct impression that I might very well
run up against the limitations of the language when dealing with very
complex problems.

For 90% of tasks, even large projects, Python will certainly have
enough in its ever expanding bag of tricks to provide a clean,
maintainable solution. But that other 10% keeps lisp hackers from
using Python for exploratory programming - seeking solutions in
problem domains that have not been solved before.
Andrew Dalke
2003-10-09 18:45:47 UTC
Permalink
i realize that this thread is hopelessly amorphous, but this post did
introduce some concrete issues which bear concrete responses...
Thank you for the commentary.
i got only as far as the realization that, in order to be of any use,
unicode
data management has to support the eventual primitive string operations.
which
introduces the problem that, in many cases, these primitive operations
eventually devolve to the respective os api. which, if one compares apple
and
unix apis are anything but uniform. it is simply not possible to provide
them
with the same data and do anything worthwhile. if it is possible to give
some
concrete pointers to how other languages provide for this i would be
grateful.

Python does it by ignoring the respective os APIs, if I understand
your meaning and Python's implementation correctly. Here's some
more information about Unicode in Python

http://www.python.org/peps/pep-0100.html
http://www.python.org/peps/pep-0261.html
http://www.python.org/peps/pep-0277.html

http://www.python.org/doc/current/ref/strings.html

http://www.python.org/doc/current/lib/module-unicodedata.html
http://www.python.org/doc/current/lib/module-codecs.html
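The approach those documents describe can be sketched in a few lines. This uses the modern Python 3 spelling (the Python of 2003 wrote Unicode literals as u"..."), and only stdlib calls:

```python
import unicodedata

# bytes as they might arrive from a file or socket, in some explicit encoding
data = "Straße".encode("utf-8")

# decoded into the single internal text type -- the same on every OS,
# which is the sense in which Python "ignores the respective os APIs"
text = data.decode("utf-8")

# string operations then work on code points, not on raw bytes
upper = text.upper()

# the unicodedata module linked above exposes character properties
name = unicodedata.name("ß")
```

Encoding and decoding happen only at the boundaries; everything in between is OS-independent.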
and i have no idea what people do with surrogate pairs.
See PEP 261 listed above for commentary, and you may want
to email the author of that PEP, Paul Prescod. I am definitely
not the one to ask.
yes, there are several available common-lisp implementations for http
clients
and servers. they offer significant trade-offs in api complexity,
functionality, resource requirements and performance.
And there are several available Python implementations for the same;
Twisted's being the most notable. But the two main distributions (and
variants like Stackless) include a common API for it, which makes
it easy to start, and for most cases is sufficient.

I fully understand that it isn't part of the standard, but it would be
useful if there was a consensus that "packages X, Y, and Z will
always be included in our distributions."
if one needs to _port_ it to a new lisp, yes. perhaps you skipped over the
list of lisps to which it has been ported. if you look at the #+/-
conditionalization, you may observe that the differences are not
significant.

You are correct, and I did skip that list.

Andrew
dalke at dalkescientific.com
MetalOne
2003-10-15 18:05:16 UTC
Permalink
Raffael Cavallaro

I don't know why but I feel like trying to summarize.

I initially thought your position was that lambdas should never be
used. I believe that Brian McNamara and Ken Shan presented powerful
arguments in support of lambda. Your position now appears to have
changed to state that lambdas are ok to use, but their use should be
restricted. One point would appear to desire avoiding duplicate
lambdas. This makes sense. Duplication of this sort is often found
in "if statment" conditional tests also. The next point would be to
name the function if a good name can be found. I believe that
sometimes the code is clearer than a name. Mathematical notation was
invented because natural language is imprecise. Sometimes a name is
better than the code. The name gives a good idea of the "how" and
perhaps you can defer looking at the "how". Sometimes I think using
code in combination with a comment is better. A comment can say a
little more than a name, and the code gives the precision. So as
Marcin said, it is a balancing act to create readable code.

I would like to say that I have found this entire thread very
comforting. I have been programming for 18 years now. For the most
part, when I read other peoples code I see nothing but 300+ line
functions. I have come to feel like most programmers have no idea
what they are doing. But when you're writing small functions and
everybody else is writing 300+ line functions you begin to wonder if
it is you that is doing something wrong. It is nice to see that other
people actually do think about how to write and structure good code.
Erann Gat
2003-10-06 19:19:54 UTC
Permalink
In article <eppstein-9700A3.10461306102003 at news.service.uci.edu>, David
In article
<my-first-name.my-last-name-0610030955090001 at k-137-79-50-101.jpl.nasa.gov>,
: (with-collector collect
: (do-file-lines (l some-file-name)
: (if (some-property l) (collect l))))
: This returns a list of all the lines in a file that have some property.
OK, that's _definitely_ just a filter: filter someproperty somefilename
Perhaps throw in a fold if you are trying to abstract "collect".
The net effect is a filter, but again, you need to stop thinking about the
"what" and start thinking about the "how", otherwise, as I said, there's
no reason to use anything other than machine language.
Answer 1: literal translation into Python. The closest analogue of
with-collector etc would be Python's simple generators (yield keyword)
yield l
You left out the with-collector part.

But it's true that my examples are less convincing given the existence of
yield (which I had forgotten about). But the point is that in pre-yield
Python you were stuck until the language designers got around to adding
it.
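For reference, the yield-based Python analogue of the with-collector snippet might look like this (some_property and the file path are the hypothetical names carried over from the Lisp example):

```python
def matching_lines(path, some_property):
    """Yield each line of the file satisfying the predicate --
    roughly what the with-collector/do-file-lines code collects."""
    with open(path) as f:
        for line in f:
            if some_property(line):
                yield line

# usage sketch:
# lines = list(matching_lines(some_file_name, some_property))
```

The generator plays the role of the collector: the caller decides whether to materialize the results into a list or consume them lazily.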

I'll try to come up with a more convincing short example if I find some
free time today.

E.
Greg Ewing (using news.cis.dfn.de)
2003-10-13 02:28:57 UTC
Permalink
It has sometimes been said that Lisp should use first and
rest instead of car and cdr
I used to think something like that would be more logical, too.
Until one day it occurred to me that building lists is only
one possible, albeit common, use for cons cells. A cons cell
is actually a completely general-purpose two-element data
structure, and as such its accessors should have names that
don't come with any preconceived semantic connotations.

From that point of view, "car" and "cdr" are as good
as anything!
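That generality is easy to demonstrate even outside Lisp. A sketch of cons/car/cdr built from Python closures (a classic illustrative construction, not how any Lisp actually implements cells):

```python
def cons(a, d):
    """A two-element cell holding any pair of values."""
    return lambda pick: a if pick == 0 else d

def car(cell):
    return cell(0)

def cdr(cell):
    return cell(1)

# one use: a list, (1 2 3), as a chain of cells
lst = cons(1, cons(2, cons(3, None)))

# another use: the very same cell type as a key/value binding
pair = cons("key", "value")
```

The cell itself knows nothing about lists; "first"/"rest" would mislabel the second use, which is Greg's point about semantic connotations.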
--
Greg Ewing, Computer Science Dept,
University of Canterbury,
Christchurch, New Zealand
http://www.cosc.canterbury.ac.nz/~greg
Björn Lindberg
2003-10-10 14:16:42 UTC
Permalink
If your problems are trivial, I suppose the presumed lower startup
costs of Python may mark it as a good solution medium.
I find no significant difference in startup time between python and
mzscheme.
<sarcasm>
My preliminary results in this very important benchmark indicate that
python performs just as well as the two benchmarked Common Lisps:

200 bjorn at nex:~> time for ((i=0; i<100; i++)); do lisp -noinit -eval '(quit)'; done

real 0m2,24s
user 0m1,36s
sys 0m0,83s
201 bjorn at nex:~> time for ((i=0; i<100; i++)); do lisp -noinit -eval '(quit)'; done

real 0m2,24s
user 0m1,39s
sys 0m0,82s
202 bjorn at nex:~> time for ((i=0; i<100; i++)); do clisp -q -x '(quit)'; done

real 0m2,83s
user 0m1,74s
sys 0m1,03s
203 bjorn at nex:~> time for ((i=0; i<100; i++)); do clisp -q -x '(quit)'; done

real 0m2,79s
user 0m1,67s
sys 0m1,09s
204 bjorn at nex:~> time for ((i=0; i<100; i++)); do python -c exit; done

real 0m2,41s
user 0m1,85s
sys 0m0,52s
205 bjorn at nex:~> time for ((i=0; i<100; i++)); do python -c exit; done

real 0m2,41s
user 0m1,89s
sys 0m0,52s

</sarcasm>


Björn
