Discussion:
Python syntax in Lisp and Scheme
Kenny Tilton
2003-10-03 13:52:07 UTC
It'd be interesting to know where people got the idea of learning
Scheme/LISP from (apart from compulsory university courses)?
http://alu.cliki.net/The%20Road%20to%20Lisp%20Survey
That recently got repotted from another cliki and it's a little mangled,
but until after ILC2003 I am a little too swamped to clean it up.
Me and my big mouth. Now that I have advertised the survey far and wide,
and revisited it and seen up close the storm damage, sh*t, there goes
the morning. :) Well, I needed a break from RoboCells:

http://sourceforge.net/projects/robocells/

I am going to do what I can to fix up at least the road categorization,
and a quick glance revealed some great new entries, two that belong in
my Top Ten (with apologies to those getting bumped).

kenny
Bengt Richter
2003-10-08 22:12:31 UTC
You know I think that this thread has so far set a comp.lang.* record
for civility in the face of a massively cross-posted language
comparison thread. I was even wondering if it was going to die a quiet
death, too.
Ah well, we all knew it was too good to last. Have at it, lads!
Common Lisp is an ugly language that is impossible to understand with
crufty semantics
Scheme is only used by ivory-tower academics and is irrelevant to
real world programming
Python is a religion that worships at the feet of Guido van Rossum,
combining the syntactic flaws of lisp with a bad case of feeping
creaturisms taken from languages more civilized than itself
There. Is everyone pissed off now?
No, that seems about right.
LOL ;-)

Regards,
Bengt Richter
Thomas F. Burdick
2003-10-08 06:24:35 UTC
In article <xcvpth8rcfh.fsf at famine.ocf.berkeley.edu>,
I find the Lisp syntax hardly readable when everything looks alike,
mostly words and parentheses, and when every level of nesting requires
parens. I understand that it's easier to work with by macros, but it's
harder to work with by humans like me.
You find delimited words more difficult than symbols? For literate
people who use alphabet-based languages, I find this highly suspect.
Maybe readers of only ideogram languages might have different
preferences, but we are writing in English here...
well, there are a few occasions where symbols are preferable. just
imagine mathematics with words only
Oh, certainly. Unlike most languages, Lisp lets you use symbols for
your own names (which is easily abused, but not very often). A bad
example:

;; Lets you swear in your source code, cartoonishly
(define-symbol-macro $%^&!
  (error "Aw, $%^&! Something went wrong..."))

;; An example use
(defun foo (...)
  (cond
    ...
    (t $%^&!)))

And, although you generally use symbols from the KEYWORD package for
keyword arguments, you don't have to, and they don't have to be words:

(defgeneric convert-object (object new-type)
  (:documentation "Like an extensible COERCE."))

(defun convert (object &key ((-> to)))
  "Sugary"
  (convert-object object to))

(defconstant -> '-> "More sugar")

;; Example usage
(convert *thing* -> (class-of *other-thing*))

Of course, these are lame examples, but they show that Lisp *can*
incorporate little ascii-picture-symbols. Good examples would
necessarily be very domain-dependent.
--
/|_ .-----------------------.
,' .\ / | No to Imperialist war |
,--' _,' | Wage class war! |
/ / `-----------------------'
( -. |
| ) |
(`-. '--.)
`. )----'
Alex Martelli
2003-10-12 17:06:52 UTC
[quantum programming]
While an interesting topic, it's something I'm not going to worry about.
Me neither, for now.
And if I did, it would be in Python ;)
I suspect no existing language would be anywhere near adequate.
But if any current programming concept could stretch there, it might
be that of "unrealized until looked-into set of things", as in, Haskell's
"lazy" (nonstrict) lists. Now lists are sequential and thus quantumly
inappropriate, but perhaps it's a start.
I bring it up as a counter-example to the idea that all modes of
programming have been and can be explored in a current Lisp.
I conjectured one interesting possibility -- that of handling ensembles
of possible solutions to a given problem.
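
For readers who haven't met nonstrict lists, here is a rough Python
approximation of "unrealized until looked-into", using generators
(naturals and lazy are names invented purely for illustration):

def naturals():
    n = 0
    while True:       # conceptually infinite...
        yield n       # ...but each value is computed only when asked for
        n += 1

lazy = naturals()
print lazy.next(), lazy.next()   # 0 1 -- nothing further is ever computed
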
I suspect we may have to map the 'ensembles' down to sets of
items, just as we generally map concurrency down to sets of
sequential actions, in order to be able to reason about them (though
I have no proof of that conjecture). IF you have to map more
complicated intrinsics down to sequential, deterministic, univocal
"things", I'm sure you could do worse than Lisp. As to whether
that makes more sense than dreaming up completely different
languages having (e.g.) nondeterminism or multivocity as more
intrinsic concepts, I pass: it depends mostly on what human beings
will find they need to use in order to reason most effectively in
this new realm -- and quite likely different humans will find they
have different needs in this matter.
In retrospect I should have given a more obvious possibility.
At some point I hope to have computer systems I can program
by voice in English, as in "House? Could you wake me up
at 7?" That is definitely a type of programming, but Lisp is
Yeah, well, I fear the answer will be yes (it could), but it won't
do so since you haven't _asked_ it to wake you up, only if it
could. ME, I definitely don't want to use natural language with
all of its ambiguity for anything except communicating with
other human beings, thankyouverymuch.
a language designed for text, not speed.
*blink* what does THAT doubtful assertion have to do with anything
else we were discussing just now...? I think lisp was designed for
lists (as opposed to, say, snobol, which WAS "designed for text") and
that they're a general enough data structure (and supplemented in
today's lisps with other good data structures) that they'll be quite good
for all kinds of 'normal' (deterministic &c) programming. As for speed,
I'm sure it's easier to get it out of lisp than out of python right now.
So what's your point, and its relation to the above...?
I believe it is an accepted fact that uniformity in GUI design is a good
thing because users don't need to learn arbitrarily different ways of
using different programs. You only need different ways of interaction
when a program actually requires it for its specific domain.
Yes, I agree this IS generally accepted (with, of course, some dissenters,
but in a minority).
My spreadsheet program looks different from my word processor
Sure, so do mine, but their menus are quite similar -- in as much as
it makes sense for them to have similar operations -- and ditto ditto
for their toolbars, keyboard shortcuts, etc etc. I.e. the differences
only come "when needed for a specific domain" just as Pascal just
said. So I don't know what you're intending with this answer.
is more in common. Still, the phrase "practicality beats purity"
seems appropriate here.
Uniformity is more practical than diversity: e.g. ctrl-c as the Copy
operation everywhere means my fingers, even more than my brain, get
used to it. If you assign ctrl-c to some totally different operation in
your gui app "because you think it's more practical" you're gonna
drive me crazy, assuming I have to use your app. (That already
happens to me with the -- gnome based i think -- programs using
ctrl-z for minimize instead of undo -- I'm starting to have frayed
nerves about that even for GVIM, one of the programs I use most
often...:-).
I firmly believe people can in general easily handle much more
complicated syntax than Lisp has. There's plenty of room to
spare in people's heads for this subject.
Sure, but is it worth it?
Do you have any doubt to my answer? :)
Given the difficulty I'm having understanding your stance[s] in
this post, I do. My own answer would be that syntax sugar is
in people's head anyway because of different contexts -- infix
arithmetic taught since primary school, indentation in outlines
and pseudocode, etc etc -- so, following that already-ingrained
set of conventions takes no "room to spare in people's heads" --
indeed, the contrary -- it saves them effort. If people's head
started as "tabula rasa" it might be different, but they don't, so
that's a moot issue.

That much being said, I _do_ like prefix syntax. In some cases
I need to sum a+b+c+d and repeating that silly plus rather than
writing (+ a b c d) grates. Or I need to check a<b<c<d and
again I wish I could more summarily write (< a b c d). When I
designed my own language for bridge-hands evaluation, BBL, I
used prefix notation, though in the form operator ( operands )
[which I thought would have been easier for other bridge players
to use], e.g.:

& ( # weak NT opener requires AND of two things:
    s ( 1 g 4 3 3 3     # shape 4333 (any), or
        2 g 4 4 3 2     # 4432 (any), or
        3 3- 3- 3- 5    # 5332 with 5 clubs, or
        4 3- 3- 5 3-    # 5332 with 5 diamonds
      )
    < ( 12                      # as well as, 13-15 range for
        \+ SHDC c( 4 3 2 1 0)   # normal Milton-Work pointcount
        16
      )
  )
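
As an aside, Python has close analogues of both n-ary prefix forms
mentioned above; a tiny illustrative sketch:

a, b, c, d = 1, 2, 3, 4
print sum([a, b, c, d])   # 10 -- one operator name, as in (+ a b c d)
print a < b < c < d       # True -- chained comparison, as in (< a b c d)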

Maybe readers are starting to understand why I don't WANT to
use a language I design myself;-). Anyway, the language was
NOT enthusiastically taken up, until I wrote code generators with
a GUI accepting conditions in more "ordinary looking" notations
and building this, ahem, intrinsically "better" one;-) -- then, but only
then, did other players start using this to customize hand generators
and the like. (Yes, I did have macros, but puny enough that they
still required operator(operands) syntax -- they basically served only
to reduce duplication, or provide some little abstraction, not to
drastically change the language syntax at all). Ah well -- maybe I
should just put the BBL (Turbo Pascal) implementation and (Italian-
language) reference manual online -- it still moves nostalgia in me!-)
Convenience is what matters. If you are able to conveniently express
solutions for hard problems, then you win. In the long run, it doesn't
My APL experience tells me this is false: conveniently expressing
solutions is HALF the problem -- you (and others!) have to be
able to read them back and maintain and alter them later too.
matter much how things behave in the background, only at first.
Personally, I would love to write equations on a screen like I
would on paper, with integral signs, radicals, powers, etc. and
not have to change my notation to meet the limitations of computer
input systems.
So jot your equations on a tablet-screen and look for a good
enriched text recognition system. What's programming gotta
do with it?
For Lisp is a language tuned to keyboard input and not the full
range of human expression. (As with speech.)
Python even more so on the output side -- try getting a screen-reader to
do a halfway decent job with it. But what does this matter here?
(I know, there are people who can write equations in TeX as
fast as they can on paper. But I'm talking about lazy ol' me
who wants the convenience.)
Or, will there ever be a computer/robot combination I can
teach to dance? Will I do so in Lisp?
You may want to teach by showing and having the computer
infer more general rules from example. Whether the inference
engine will be best built in lisp, prolog, ocaml, mozart, whatever,
I dunno. I don't think it will be optimally done in Python, though.
"Horses for courses" is my philosophy in programming.
It seems to me that in Python, just as in most other languages, you
always have to be aware that you are dealing with classes and objects.
Given the "everything is an object" (classes included) and every object
belongs to a class, you could indeed say that -- in much the same sense
as you may be said to always be aware that you're breathing air in
everyday life. Such awareness is typically very subconscious, of course.
Why should one care? Why does the language force me to see that when it
really doesn't contribute to the solution?
I'm not sure in what sense "python forces you to see" that, e.g.,
the number 2 is an object -- or how can that fail to "contribute to
the solution". Care to exemplify?
Hmmm.. Is the number '1' an object? Is a function an object?
What about a module? A list? A class?
Yes to all of the above, in Python. I don't get your point.
print sum(range(100))
4950
Where in that example are you aware that you are dealing with classes
and objects?
Me? At every step -- I know 'sum' names a builtin object that is a
function (belonging to the class of builtin functions) taking one argument
which is a sequence, 'range' names another builtin object returning
a list object, etc. I'm not directly dealing with any of their classes --
I know they belong to classes, like any object does, but I have no need
to think about them in this specific statement (in fact, I hardly ever do;
signature-based polymorphism is what I usually care about, not class
membership, far more often than not).
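
A small illustration of the point -- any Python value can be asked
about its class directly:

import math
print type(2), type("x")        # even literals are objects with classes
print type(math), type(sum)     # so are modules and builtin functions
print (2).__class__ is type(2)  # True -- the object itself knows its class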

But I don't get your points -- neither Andrew's nor Pascal's. How does
this differ from the awareness I might have in some macro-enhanced
lisp where I would type (print (sum (range 100))) or the like?
conjecture is that additional syntax can make some things easier.
That a problem can be solved without new syntax does not
contradict my conjecture.
But even if we provisionally concede your conjecture we are still
left wondering: is the degree of easing so high that it overcomes
the inevitable increase in complication, needed for a language to
have N+1 syntax forms where previously it only had N? I.e., it's
in general a difficult engineering tradeoff, like many in language
design -- which is why I'd rather delegate the decisions on these
tradeoffs to individuals, groups and processes with a proven track
record for making a lot of them with overall results that I find
delightful, rather than disperse them to myself & many others
(creating lots of not-quite-congruent language dialects).


Alex
Daniel P. M. Silva
2003-10-13 01:49:46 UTC
Post by Alex Martelli
Yeah, well, I fear the answer will be yes (it could), but it won't
do so since you haven't _asked_ it to wake you up, only if it
could.
Pshaw. My hypothetical house of the 2050s or so will know
that "could" in this context is a command. :)
Good luck the first time you want to ask it about its capabilities,
and my best wishes that you'll remember to use VERY precise
phrasing then.
Hehe, I hope I never scream "NAMESPACE-MAPPED-SYMBOLS" at my house :P

- DS
David Mertz
2003-10-16 04:14:58 UTC
|> Here's a quick rule that is pretty damn close to categorically true for
|> Python programming: If you use more than five levels of indent, you are
|> coding badly. Something is in desperate need of refactoring.

Pascal Bourguignon <spam at thalassa.informatimago.com> wrote previously:
|Here is a histogram of the depths of the top level sexps found in my
|emacs sources:
|((1 . 325) (2 . 329) (3 . 231) (4 . 163) (5 . 138) (6 . 158) (7 . 102)
| (8 . 94) (9 . 63) (10 . 40) (11 . 16) (12 . 20) (13 . 9) (14 . 4)
| (15 . 5) (16 . 4) (17 . 2) (19 . 2) (23 . 1))
|Am I depraved in writing code with depths down to 23?

As I've written lots of times in these threads, I haven't really used
Lisp. In fact, I really only did my first programming in Scheme (for an
article on SSAX) in the last couple weeks; I know Scheme isn't Common
Lisp, no need to point that out again. However, I -have- read a fair
number of snippets of Lisp code over the years, so my familiarity runs
slightly deeper than the couple weeks.

All that said, my gut feeling is that depth 23 is, indeed, ill-designed.
Even the more common occurrences of 12 or 13 levels seems like a lot
more than my brain can reason about. I'm happy to stipulate that
Bourguignon is smarter than I am... but I'm still confident that I can
do this sort of thinking better than 95% of the people who might have to
READ his code.

And the purpose of code, after all, is to COMMUNICATE ideas: firstly to
other people, and only secondly to machines.

|Ok, in Python, you may have also expressions that will further deepen
|the tree, but how can you justify an artificial segregation between
|indentation and sub-expressions?

Because it's Python! There is a fundamental syntactic distinction
between statements and expressions, and statements introduce blocks
(inside suites, bare expressions can occur though--usually functions
called for their side effects). It is the belief of the BDFL--and of
the large majority of programmers who use Python--that a syntactic
distinction between blocks with relative indentation and expressions
that nest using parens and commas AIDS READABILITY.
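
A tiny invented example of that distinction -- the statement form
builds its result inside an indented suite, the expression form nests
within parens and commas:

total = 0
for n in range(10):      # statements: an indented suite
    if n % 2 == 0:
        total += n
total2 = sum([n for n in range(10) if n % 2 == 0])  # one nested expression
print total, total2      # 20 20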

I can say experientially, and from reading and talking to other
Pythonistas, that my brain does a little flip when it finishes
identifying a suite, then starts identifying the parts of an expression.
And conveniently, in Python, the sort of thinking I need to do when I
read or write the lines of a suite is a bit different than for the parts
of an expression. Not just because I am deceived by the syntax, but
because the language really does arrange for a different sort of thing
to go on in statements versus expressions (obviously, there are SOME
overlaps and equivalences; but there's still a useful pattern to the
distinction).

Still, for a real comparison of depth, I suppose I'd need to look at the
combined depth of indent+paren-nesting. Even for that, well-designed
Python programs top out at 7 or 8, IMO. Maybe something a little deeper
reasonably occurs occasionally, but the histogram would sure look
different from Pascal's ("Flat is better than nested").
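
For the curious, a crude sketch (not David's code) of how that
combined nesting depth might be measured -- it ignores strings and
comments, so treat it only as an approximation:

def max_depth(text):
    depth = best = 0
    for ch in text:
        if ch in '([{':
            depth += 1
            if depth > best:
                best = depth
        elif ch in ')]}':
            depth -= 1
    return best

print max_depth('(with-collector collect (do-file-lines (l f) (collect l)))')  # 3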

Yours, David...

--
---[ to our friends at TLAs (spread the word) ]--------------------------
Iran nuclear neocon POTUS patriot Pakistan weaponized uranium invasion UN
smallpox Gitmo Castro Tikrit armed revolution Carnivore al-Qaeda sarin
---[ Gnosis Software ("We know stuff") <mertz at gnosis.cx> ]---------------
Pascal Costanza
2003-10-09 13:59:24 UTC
you can use macros to do everything one could use HOFs for (if you
really want).
I should have added: As long as it should execute at compile time, of
course.
Really? What about arbitrary recursion?
I don't see the problem. Maybe you have an example? I am sure the
Lisp'ers here can come up with a macro solution for it.
I'm not terribly familiar with the details of Lisp macros but since
recursion can easily lead to non-termination you certainly need tight
restrictions on recursion among macros in order to ensure termination of
macro substitution, don't you? Or at least some ad-hoc depth limitation.
The Lisp mindset is not to solve problems that you don't have.

If your code has a bug then you need to debug it. Lisp development
environments provide excellent debugging capabilities out of the box.
Don't guess how hard it is when you don't have the experience yet.


Pascal
--
Pascal Costanza University of Bonn
mailto:costanza at web.de Institute of Computer Science III
http://www.pascalcostanza.de Römerstr. 164, D-53117 Bonn (Germany)
David Eppstein
2003-10-22 03:33:23 UTC
In article <1jclovopokeogrdajo6dfmhm090cdllfki at 4ax.com>,
It's certainly true that mathematicians do not _write_
proofs in formal languages. But all the proofs that I'm
aware of _could_ be formalized quite easily. Are you
aware of any counterexamples to this? Things that
mathematicians accept as correct proofs which are
not clearly formalizable in, say, ZFC?
I am not claiming that it is a counterexample, but I've always met
with some difficulties imagining how the usual proof of Euler's
theorem about the number of corners, sides and faces of a polyhedron
(correct terminology, BTW?) could be formalized. Also, however that
could be done, I am left with an unsatisfied feeling about how complex it
would be compared to the conceptual simplicity of the proof itself.
Which one do you think is the usual proof?
http://www.ics.uci.edu/~eppstein/junkyard/euler/

Anyway, this exact example was the basis for a whole book about what is
involved in going from informal proof idea to formal proof:
http://www.ics.uci.edu/~eppstein/junkyard/euler/refs.html#Lak
--
David Eppstein http://www.ics.uci.edu/~eppstein/
Univ. of California, Irvine, School of Information & Computer Science
Alex Martelli
2003-10-04 19:48:54 UTC
Bengt Richter wrote:
...
I like the Bunch class, but the name suggests vegetables to me ;-)
Well, I _like_ vegetables...
BTW, care to comment on a couple of close variants of Bunch with
per-object class dicts? ...
def mkNSC(**kwds): return type('NSC', (), kwds)()
Very nice (apart from the yecchy name;-).
or, stretching the one line a bit to use the instance dict,
def mkNSO(**kwds): o=type('NSO', (), {})(); o.__dict__.update(kwds); return o
I don't see the advantage of explicitly using an empty dict and then
updating it with kwds, vs using kwds directly.
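
For context, the Bunch class referred to is presumably along these
lines -- keyword arguments simply become instance attributes:

class Bunch(object):
    def __init__(self, **kwds):
        self.__dict__.update(kwds)

p = Bunch(x=1, y=2)
print p.x + p.y   # 3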
I'm wondering how much space is actually wasted with a throwaway class. Is
there a lazy copy-on-write kind of optimization for class and instance
dicts that prevents useless proliferation? I.e.,
I strongly doubt there's any "lazy copy-on-write" anywhere in Python.
The "throwaway class" will be its dict (which, here, you need -- that's
the NS you're wrapping, after all) plus a little bit (several dozen bytes
for the typeobject, I'd imagine); an instance of Bunch, probably a bit
smaller. But if you're going to throw either away soon, who cares?
but I think the "purer" (more extreme) versions are
interesting "tipizations" for the languages, anyway.
Oh goody, a new word (for me ;-). Would you define "tipization"?
I thought I was making up a word, and slipped by spelling it
as in Italiano "tipo" rather than English "type". It appears
(from Google) that "typization" IS an existing word (sometimes
mis-spelled as "tipization"), roughly in the meaning I intended
("characterization of types") -- though such a high proportion
of the research papers, institutes, etc, using "typization",
seems to come from Slavic or Baltic countries, that I _am_
left wondering...;-).


Alex
Alan Crowe
2003-10-04 11:31:45 UTC
If a set of macros could be written to improve LISP
syntax, then I think that might be an amazing thing. An
interesting question to me is why hasn't this already been
done.
I think the issue is the grandeur of the Lisp vision. More
ambitious projects require larger code bases. Ambition is
hard to quantify. Nevertheless one must face the issue of
scaling. Does code size go as the cube of ambition, or is it
the fourth power of ambition? Or something else entirely?

Lisp aspires to change the exponent, not the constant
factor. The constant factor is very important. That is why
CL has FILL :-) but shrinking the constant factor has been
done (and with C++ undone).

Macros can be used to abbreviate code. One can spot that one
is typing similar code over and over again. One says
"whoops, I'm typing macro expansions". Do you use macros to
tune the syntax, so that you type N/2 characters instead of
N characters, or do you capture the whole concept in macro
and eliminate the repetition altogether?

The point is that there is nowhere very interesting to go
with syntax tuning. It is the bolder goal of changing the
exponent, and thus seriously enlarging the realm of the
possible, that excites.

Alan Crowe
Ingvar Mattsson
2003-10-09 11:16:09 UTC
method overloading,
Joe> Now I'm *really* confused. I thought method overloading involved
Joe> having a method do something different depending on the type of
Joe> arguments presented to it. CLOS certainly does that.
He probably means "operator overloading" -- in languages where
there is a difference between built-in operators and functions,
their OOP features let them put methods on things like "+".
Lisp doesn't let you do that, because it turns out to be a bad idea.
When you go reading someone's program, what you really want is for
the standard operators to be doing the standard and completely
understood thing.
Though if one *really* wants to have +, -, * and / as generic
functions, I imagine one can use something along the lines of:

(defpackage "GENERIC-ARITHMETIC"
  (:shadow "+" "-" "/" "*")
  (:use "COMMON-LISP"))

(in-package "GENERIC-ARITHMETIC")
(defgeneric arithmetic-identity (op arg))

(defmacro defarithmetic (op)
  (let ((two-arg
          (intern (concatenate 'string "TWO-ARG-" (symbol-name op))
                  "GENERIC-ARITHMETIC"))
        (cl-op (find-symbol (symbol-name op) "COMMON-LISP")))
    `(progn
       (defun ,op (&rest args)
         (cond ((null args) (arithmetic-identity ',op nil))
               ((null (cdr args))
                (,two-arg (arithmetic-identity ',op (car args))
                          (car args)))
               (t (reduce (function ,two-arg)
                          (cdr args)
                          :initial-value (car args)))))
       (defgeneric ,two-arg (arg1 arg2))
       (defmethod ,two-arg ((arg1 number) (arg2 number))
         (,cl-op arg1 arg2)))))

Now, I have (because I am lazy) left out definitions of the generic
function ARITHMETIC-IDENTITY (general idea, when fed an operator and
NIL, it returns the most generic identity, when fed an operator and an
argument, it can return a value that is more suitable), and there are
probably errors in the code, too.

But, in principle, that should be enough of a framework to build from,
I think.

//Ingvar
--
My posts are fair game for anybody who wants to distribute the countless
pearls of wisdom sprinkled in them, as long as I'm attributed.
-- Martin Wisse, in a.f.p
Edi Weitz
2003-10-16 21:08:43 UTC
For simple use of built-in libraries,
http://online.effbot.org/2003_08_01_archive.htm#troll
looks like a good test case.
Quick hack follows.

edi at bird:/tmp > cat troll.lisp
(asdf:oos 'asdf:load-op :aserve)
(asdf:oos 'asdf:load-op :cl-ppcre)

(defparameter *scanner*
(cl-ppcre:create-scanner
"<a href=\"AuthorThreads.asp[^\"]*\">([^<]+)</a></td>\\s*
<td align=\"center\">[^<]+</td>\\s*
<td align=\"center\">[^<]+</td>\\s*
<td align=\"center\">\\d+</td>\\s*
<td align=\"center\">(\\d+)</td>\\s*
<td align=\"center\">(\\d+)</td>\\s*
<td align=\"center\">\\d+</td>\\s*
<td align=\"center\">(\\d+)</td>\\s*"))

(defun troll-checker (name)
  (let ((target
          (net.aserve.client:do-http-request
           (format nil "http://netscan.research.microsoft.com/Static/author/authorprofile.asp?searchfor=~A" name)
           :protocol :http/1.0)))
    (cl-ppcre:do-scans (match-start match-end reg-starts reg-ends *scanner* target)
      (flet ((nth-group (n)
               (subseq target (aref reg-starts n) (aref reg-ends n))))
        (let* ((group (nth-group 0))
               (posts (parse-integer (nth-group 1)))
               (replies (parse-integer (nth-group 2)))
               (threads-touched (parse-integer (nth-group 3)))
               (reply-to-post-ratio (/ replies posts))
               (threads-to-post-ratio (/ threads-touched posts)))
          (unless (< posts 10)
            (format t "~55A R~,2F T~,2F ~:[~;TROLL~:[?~;!~]~]~%"
                    (subseq group 0 (min 55 (length group)))
                    reply-to-post-ratio
                    threads-to-post-ratio
                    (and (> reply-to-post-ratio .8)
                         (< threads-to-post-ratio .4))
                    (< threads-to-post-ratio .2))))))))

(compile 'troll-checker)

edi at bird:/tmp > cmucl
; Loading #p"/home/edi/.cmucl-init".
CMU Common Lisp 18e, running on bird.agharta.de
With core: /usr/local/lib/cmucl/lib/lisp.core
Dumped on: Thu, 2003-04-03 15:47:12+02:00 on orion
Send questions and bug reports to your local CMUCL maintainer,
or see <http://www.cons.org/cmucl/support.html>.
Loaded subsystems:
Python 1.1, target Intel x86
CLOS 18e (based on PCL September 16 92 PCL (f))
* (load "troll")

; loading system definition from /usr/local/lisp/Registry/aserve.asd into
; #<The ASDF1017 package, 0/9 internal, 0/9 external>
; registering #<SYSTEM ASERVE {4854AEF5}> as ASERVE
; loading system definition from /usr/local/lisp/Registry/acl-compat.asd into
; #<The ASDF1059 package, 0/9 internal, 0/9 external>
; registering #<SYSTEM ACL-COMPAT {4869AD35}> as ACL-COMPAT
; loading system definition from /usr/local/lisp/Registry/htmlgen.asd into
; #<The ASDF1145 package, 0/9 internal, 0/9 external>
; registering #<SYSTEM HTMLGEN {487E64C5}> as HTMLGEN
; loading system definition from /usr/local/lisp/Registry/cl-ppcre.asd into
; #<The ASDF1813 package, 0/9 internal, 0/9 external>
; registering #<SYSTEM #:CL-PPCRE {48F32835}> as CL-PPCRE
; Compiling LAMBDA (NAME):
; Compiling Top-Level Form:
T
* (troll-checker "edi at agharta.de")
comp.lang.lisp R0.93 T0.63
NIL
* (troll-checker "eppstein at ics.uci.edu")
rec.photo.digital R1.00 T0.76
rec.arts.sf.written R0.99 T0.57
comp.lang.python R0.98 T0.64
rec.photo.equipment.35mm R1.00 T0.73
sci.math R1.00 T0.77
rec.puzzles R1.00 T0.75
comp.theory R1.00 T0.56
comp.graphics.algorithms R1.00 T0.87
comp.sys.mac.apps R1.00 T0.69
NIL
* (troll-checker "spam at thalassa.informatimago.com")
comp.lang.lisp R0.91 T0.44
fr.comp.os.unix R1.00 T0.70
es.comp.os.linux.programacion R1.00 T0.67
fr.comp.lang.lisp R1.00 T0.40 TROLL?
comp.unix.programmer R1.00 T0.92
sci.space.moderated R1.00 T0.43
gnu.emacs.help R0.95 T0.84
sci.space.policy R1.00 T0.33 TROLL?
alt.folklore.computers R1.00 T0.43
comp.lang.scheme R0.83 T0.58
fr.comp.os.mac-os.x R0.92 T0.83
NIL

Granted, Portable AllegroServe[1] and CL-PPCRE[2] aren't "built-in"
(but freely available, compatible with various CL compilers, and easy
to install) and Python might have a bit more syntactic sugar but it
wasn't _too_ hard to do that in Lisp.

Edi

[1] <http://portableaserve.sf.net/>
[2] <http://weitz.de/cl-ppcre/>
Dave Benjamin
2003-10-09 03:01:02 UTC
For instance, I always thought this was a cooler alternative to the
try/finally block to ensure that a file gets closed (I'll try not to
open('input.txt', { |f|
    do_something_with(f)
    do_something_else_with(f)
})

f = open('input.txt')
do_something_with(f)
do_something_else_with(f)
f.close()
"Explicit is better than implicit"
In that case, why do we eschew code blocks, yet have no problem with the
implicit invocation of an iterator, as in:

for line in file('input.txt'):
    do_something_with(line)

This is not to say that I dislike that behavior; in fact, I find it
*beneficial* that the manner of looping is *implicit* because you can
substitute a generator for a sequence without changing the usage. But
there's little readability difference, IMHO, between that and:

file('input.txt').each_line({ |line|
    do_something_with(line)
})

Plus, the first example is only obvious because I called my iteration
variable "line", and because this behavior is already widely known. What
if I wrote:

for byte in file('input.dat'):
    do_something_with(byte)

That would be a bit misleading, no? But the mistake isn't obvious. OTOH,
in the more explicit (in this case) Ruby language, it would look silly:

open('input.txt').each_line { |byte|
    # huh? why a byte? we said each_line!
}

I think this is important to point out, because the implicit/explicit
rule comes up all the time, yet Python is implicit about lots of things!
To name a few:

- for loops and iterators
- types of variables
- dispatching via subclass polymorphism
- coercion (int->float, int->long...)
- exceptions (in contrast with Java's checked exceptions)
- __magic_methods__
- metaclasses
- nested scopes (compared to yesteryear's lambda x, y=y, z=z: ...)
- list comprehensions

In all of the above cases (with a bit of hesitation toward the voodoo of
metaclasses) I think Python is a better language for it. On the other
hand, Perl's implicit $_ variable is a good example of the hazards of
implicitness; that can be downright confusing. So, it's not cut and dried
by any means.
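
To make the substitutability point concrete, a small invented example:
the looping code is identical whether it is handed a list or a
generator:

def evens(limit):
    n = 0
    while n < limit:
        yield n           # produced lazily, one at a time
        n += 2

for n in [0, 2, 4]:       # iterating over a list...
    print n
for n in evens(6):        # ...or over a generator: same usage, same output
    print n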

If all you're saying is that naming something is better than not naming
something because explicit is better than implicit, I'd have to ask why:

a = 5
b = 6
c = 7
d = a + b
e = c / 2
result = d + e
return result

Is any better than:

...
return (a + b) + (c / 2)

To me, it's the same issue. Why should I have to name something that I'm
just going to return in the next statement, or pass as a parameter, and
then be done with it? Does that really increase either readability or
understandability? Why should I name something that I'm not going to ask
for later?
Even your example clearly shows that the try block is much more readable
and understandable.
That's why it's considered evil by the majority of Python developers.
Readability is a moving target. I think that the code block syntax
strikes a nice balance between readability and expressiveness. As far as
what the majority of Python developers consider evil, I don't think
we've got the stats back on that one.
But the anonymous version still looks more concise to me.
Python prioritizes things differently than other languages.
It's not an APL. "Readability counts"
This is nothing like APL... if anything, it's like Smalltalk, a language
designed to be readable by children! I realize that APL sacrificed
readability for expressiveness to an uncomfortable extreme, but I really
think you're comparing apples and oranges here. List comprehensions are
closer to APL than code blocks.

Dave
Raymond Wiker
2003-10-06 11:09:00 UTC
1.) Inventing new control structures (implement lazy data structures,
implement declarative control structures, etc.)
=> This one is rarely needed in everyday application programming and
can easily be misused.
This is, IMHO, wrong. One particular example is creating
macros (or read macros) for giving values to application-specific data
structures.
You have to know if you want a sharp knife (which may hurt you when
misused) or a less sharp one (which takes more effort to cut
with).
It is easier to hurt yourself with a blunt knife than a sharp
one.
--
Raymond Wiker Mail: Raymond.Wiker at fast.no
Senior Software Engineer Web: http://www.fast.no/
Fast Search & Transfer ASA Phone: +47 23 01 11 60
P.O. Box 1677 Vika Fax: +47 35 54 87 99
NO-0120 Oslo, NORWAY Mob: +47 48 01 11 60

Try FAST Search: http://alltheweb.com/
Thomas F. Burdick
2003-10-06 22:13:36 UTC
Post by Raymond Wiker
(In certain cases macros) can easily be misused.
...
Post by Raymond Wiker
You have to know if you want a sharp knife (which may hurt you when
misused) or a less sharp one (which takes more effort to cut
with).
It is easier to hurt yourself with a blunt knife than a sharp
one.
Actually I've noticed that I usually cut myself when I *switch* from
a dull knife to a sharp one.
I don't think the truism about cutting yourself with dull vs sharp
knives means to say anything about superficial cuts. You don't lop
your finger off with a sharp knife, because you're handling it
carefully. With a dull knife, your best bet is to put your weight
behind it; that's also a good way to lose a finger / dump core.
--
/|_ .-----------------------.
,' .\ / | No to Imperialist war |
,--' _,' | Wage class war! |
/ / `-----------------------'
( -. |
| ) |
(`-. '--.)
`. )----'
Corey Coughlin
2003-10-11 00:00:48 UTC
You are mostly correct about Japanese, I took a year of it in college
and it is a fairly standard SOV language. (Like Latin, oddly enough.)
And I'm sure you're right about RPN vs. PN, I always get those
confused. Which is kind of the point, really, having studied math
since I was a kid I got used to stuff like "y = mx + b", can you
blame me if I have an easier time with "y = m*x + b" as opposed to
"(let y (+ (* m x) b))" (Forgive me if the parenthesis on that are
off, the newsreader editor doesn't match them, and maybe I need a
'setq' instead of a 'let' or some other thing, I'm not entirely sure.)
(And again, is the point getting more clear?) And thanks for backing
me up on car's and cdr's, I never could understand why a language
ostensibly designed for 'list processing' has such a bizarre way to
reference items in a list. But is (nth 10 mylist) really as easy as
mylist[10]? My intuition says no, not really.

Sure, I can appreciate looking at things in different ways, and it is
nice to learn new things and see how they apply. But if John Grisham
learns Japanese, does that mean he should write all his books in
Japanese? Or should he stick to English? I suppose if I were a real
CS guy (I'm actually an electrical engineer, the scheme course was one
of the two CS courses I took in college, so I'm mostly self taught) or
if I worked within a big group of Lisp programmers, I would probably
feel more comfortable with it. Since I now mostly work as an isolated
programmer for other engineers, and the last language I was using for
everything was C, Python is a huge improvement, and it doesn't give me
too much of a headache. Sure, it's not perfect. But there's no way
I'm going to adopt Lisp as a perfect language anytime soon. That is,
if I want to keep hitting my deadlines and getting paid. And sure, I
may get comfortable and miss out on cool stuff, but on the upside,
I'll be comfortable.

Oh, and if I'm writing in this thread, I suppose I should comment on
how bad lisp macros are. Except I know nothing about them. But it
seems like most languages have dark corners like that, where you can
do thing above and beyond your standard programming practices. Python
has metaclasses, which give me a headache most of the time, so I don't
really use them at all. But I seem to get plenty of stuff done
without using them, so it works for me. If you really have to use
macros in Lisp to get things done, that sounds kind of troublesome,
but it would be consistent, it always seemed like really working well
in Lisp requires you to really know how everything works all at once,
which always kind of struck me as kind of a downside. But as I said,
I'm not the big CS guru, so Lisp just may not be for me in general.
Ah well, I suppose I'll get by with Python. :D
(Not to mention car, cdr, cadr, and
so on vs. index notation, sheesh.)
Yes, that is a real regret. It would have been useful to support
a kind of (nth 10 mylist) straight from the Scheme standard library.
Using parentheses and rpn everywhere makes lisp very easy
to parse, but I'd rather have something easy for me to understand and
That's why I prefer python, you
get a nice algebraic syntax with infix and equal signs, and it's easy
to understand.
Python is
intuitive to me out of the box, and it just keeps getting better, so I
think I'll stick with it.
First, a minor correction: Lisp/Scheme is like (* 1 2) and that is
Polish Notation or prefix; Reverse Polish Notation or postfix would be
like (1 2 *).
From what I heard about the Japanese language I have formed the
possibly oversimplified impression that it is largely postfix.
Whereas in English we say "I beat you", they may say something like "I
you beat". So I suppose all of the existing programming notations -
Lisp's and Cobol's (* 1 2) and MULTIPLY 1 BY 2, Fortran's "intuitive"
1+2, and OO's one.add(two) - are very counterintuitive to them, and
they would really like the way of HP calculators, no?
And I suppose the ancient Romans (and even the modern Vatican) would
laugh at this entire dilemma (or trilemma?) between ___fixes.
Intuition is acquired. It is purely a product of education or
brainwashing. There is nothing natural about it. And since it is
acquired, you may as well keep acquiring new intuitions and see more
horizons, rather than keep reinforcing old intuitions and stagnate.
Appreciating a foreign language such as Japanese some day is not a bad
idea.
Raffael Cavallaro
2003-10-12 19:54:59 UTC
Lispniks are driven by the assumption that there is always the
unexpected. No matter what happens, it's a safe bet that you can make
Lisp behave the way you want it to behave, even in the unlikely event
that something happens that no language designer has ever thought of
before. And even if you cannot find a perfect solution in some cases,
you will at least be able to find a good approximation for hard
problems.
This I believe is the very crux of the matter. The problem domain to
which lisp has historically been applied, artificial intelligence,
more or less guaranteed that lisp hackers would run up against the
sorts of problems that no one had ever seen before. The language
therefore evolved into a "programmable programming language," to quote
John Foderaro (or whoever first said or wrote this now famous line).

Lisp gives the programmer who knows he will be working in a domain
that is not completely cut and dried, the assurance that his language
will not prevent him from doing something that has never been done
before. Python gives me the distinct impression that I might very well
run up against the limitations of the language when dealing with very
complex problems.

For 90% of tasks, even large projects, Python will certainly have
enough in its ever expanding bag of tricks to provide a clean,
maintainable solution. But that other 10% keeps lisp hackers from
using Python for exploratory programming - seeking solutions in
problem domains that have not been solved before.
Andrew Dalke
2003-10-09 18:45:47 UTC
i realize that this thread is hopelessly amorphous, but this post did
introduce some concrete issues which bear concrete responses...
Thank you for the commentary.
i got only as far as the realization that, in order to be of any use,
unicode
data management has to support the eventual primitive string operations.
which
introduces the problem that, in many cases, these primitive operations
eventually devolve to the respective os api. which, if one compares apple
and
unix apis are anything but uniform. it is simply not possible to provide
them
with the same data and do anything worthwhile. if it is possible to give
some
concrete pointers to how other languages provide for this i would be
grateful.

Python does it by ignoring the respective os APIs, if I understand
your meaning and Python's implementation correctly. Here's some
more information about Unicode in Python

http://www.python.org/peps/pep-0100.html
http://www.python.org/peps/pep-0261.html
http://www.python.org/peps/pep-0277.html

http://www.python.org/doc/current/ref/strings.html

http://www.python.org/doc/current/lib/module-unicodedata.html
http://www.python.org/doc/current/lib/module-codecs.html
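
A minimal illustration of that OS-independent approach (assuming a
UTF-8-capable terminal for the printed bytes):

u = u"Z\u00fcrich"                    # a unicode object -- no OS API involved
print u.encode("utf-8")               # explicit translation to bytes
print unicode("Z\xc3\xbcrich", "utf-8") == u   # decoding round-trips: True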
and i have no idea what people do with surrogate pairs.
See PEP 261 listed above for commentary, and you may want
to email the author of that PEP, Paul Prescod. I am definitely
not the one to ask.
yes, there are several available common-lisp implementations for http
clients
and servers. they offer significant trade-offs in api complexity,
functionality, resource requirements and performance.
And there are several available Python implementations for the same;
Twisted's being the most notable. But the two main distributions (and
variants like Stackless) include a common API for it, which makes
it easy to start, and for most cases is sufficient.

I fully understand that it isn't part of the standard, but it would be
useful if there was a consensus that "packages X, Y, and Z will
always be included in our distributions."
if one needs to _port_ it to a new lisp, yes. perhaps you skipped over the
list of lisps to which it has been ported. if you look at the #+/-
conditionalization, you may observe that the differences are not
significant.

You are correct, and I did skip that list.

Andrew
dalke at dalkescientific.com
Edi Weitz
2003-10-10 09:13:03 UTC
Conjecture: Is it that the commercial versions of Lisp pull away
some of the people who would otherwise help raise the default
functionality of the free version? I don't think so... but then
why?
I'm pretty sure this is the case. If you lurk in c.l.l for a while
you'll see that a large part of the regular contributors aren't what
you'd call Open Source zealots. Maybe that's for historical reasons, I
don't know. But of course that's different from languages which have
always been "free" like Perl or Python.

To give one example: One of the oldest dynamic HTTP servers out there
is CL-HTTP.[1] I think it is very impressive but it has a somewhat
dubious license which doesn't allow for commercial deployment - it's
not "free." Maybe, I dunno, it would have a much higher market share
if it had been licensed like Apache when it was released. I'm sure
there's much more high-quality Lisp software out there that hasn't
even been released.

Edi.

[1] <http://www.ai.mit.edu/projects/iiip/doc/cl-http/home-page.html>

Don't be fooled by the "Last updated" line. There's still active
development - see

<ftp://ftp.ai.mit.edu/pub/users/jcma/cl-http/devo>.
David Eppstein
2003-10-10 02:10:18 UTC
In article <bm4uf6$6oj$1 at newsreader2.netcologne.de>,
It's probably just because the Common Lisp community is still relatively
small at the moment. But this situation has already started to improve a
lot.
It's only been out, what, twenty years? And another twenty before that
for other lisps... How much time do you think you need?
--
David Eppstein http://www.ics.uci.edu/~eppstein/
Univ. of California, Irvine, School of Information & Computer Science
MetalOne
2003-10-15 18:05:16 UTC
Raffael Cavallaro

I don't know why but I feel like trying to summarize.

I initially thought your position was that lambdas should never be
used. I believe that Brian McNamara and Ken Shan presented powerful
arguments in support of lambda. Your position now appears to have
changed to state that lambdas are ok to use, but their use should be
restricted. One point would appear to desire avoiding duplicate
lambdas. This makes sense. Duplication of this sort is often found
in "if statment" conditional tests also. The next point would be to
name the function if a good name can be found. I believe that
sometimes the code is clearer than a name. Mathematical notation was
invented because natural language is imprecise. Sometimes a name is
better than the code. The name gives a good idea of the "how" and
perhaps you can defer looking at the "how". Sometimes I think using
code in combination with a comment is better. A comment can say a
little more than a name, and the code gives the precision. So as
Marcin said, it is a balancing act to create readable code.

I would like to say that I have found this entire thread very
comforting. I have been programming for 18 years now. For the most
part, when I read other peoples code I see nothing but 300+ line
functions. I have come to feel like most programmers have no idea
what they are doing. But when you're writing small functions and
everybody else is writing 300+ line functions you begin to wonder if
it is you that is doing something wrong. It is nice to see that other
people actually do think about how to write and structure good code.
Erann Gat
2003-10-06 19:19:54 UTC
In article <eppstein-9700A3.10461306102003 at news.service.uci.edu>, David
In article
<my-first-name.my-last-name-0610030955090001 at k-137-79-50-101.jpl.nasa.gov>,
: (with-collector collect
:   (do-file-lines (l some-file-name)
:     (if (some-property l) (collect l))))
: This returns a list of all the lines in a file that have some property.
OK, that's _definitely_ just a filter: filter someproperty somefilename
Perhaps throw in a fold if you are trying to abstract "collect".
The net effect is a filter, but again, you need to stop thinking about the
"what" and start thinking about the "how", otherwise, as I said, there's
no reason to use anything other than machine language.
Answer 1: literal translation into Python. The closest analogue of
with-collector etc would be Python's simple generators (yield keyword)
yield l
You left out the with-collector part.

But it's true that my examples are less convincing given the existence of
yield (which I had forgotten about). But the point is that in pre-yield
Python you were stuck until the language designers got around to adding
it.

I'll try to come up with a more convincing short example if I find some
free time today.

E.
Daniel P. M. Silva
2003-10-08 16:50:39 UTC
<posted & mailed>
...
You still can't add new binding constructs or safe parameterizations like
with_directory("/tmp", do_something())
Where do_something() would be evaluated with the current directory set to
" tmp" and the old pwd would be restored afterward (even in the event of
an exception).
with_directory("/tmp", do_something)
*deferring* the call to do_something to within the with_directory
function. Python uses strict evaluation order, so if and when you
choose to explicitly CALL do_something() it gets called,
pwd = os.getcwd()
try: return thefunc(*args, **kwds)
finally: os.chdir(pwd)
this is of course a widespread idiom in Python, e.g. see
unittest.TestCase.assertRaises for example.
The only annoyance here is that there is no good 'literal' form for
a code block (Python's lambda is too puny to count as such), so you
do have to *name* the 'thefunc' argument (with a 'def' statement --
Python firmly separates statements from expressions).
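
Filled out and made runnable, the idiom Alex sketches would presumably
look like this (with_directory and report are illustrative names, not
library functions):

import os

def with_directory(dirname, thefunc, *args, **kwds):
    # run thefunc with the working directory temporarily set to dirname
    pwd = os.getcwd()
    os.chdir(dirname)
    try:
        return thefunc(*args, **kwds)
    finally:
        os.chdir(pwd)    # restored even if thefunc raises

def report():
    print "now in", os.getcwd()

with_directory("/tmp", report)   # prints "now in /tmp"; old cwd is restored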
That was my point. You have to pass a callable object to with_directory,
plus you have to save in that object any variables you might want to use,
when you'd rather say:

x = 7
with_directory("/tmp",
print "well, now I'm in ", os.getpwd()
print "x: ", x
x = 3
)
Last year -- I think at LL2 -- someone showed how they added some sort of
'using "filename":' form to Python... by hacking the interpreter.
A "using" statement (which would take a specialized object, surely not
a string, and call the object's entry/normal-exit/abnormal-exit methods)
might often be a good alternative to try/finally (which makes no provision
for 'entry', i.e. setting up, and draws no distinction between normal
and abnormal 'exits' -- often one doesn't care, but sometimes yes). On
this, I've seen some consensus on python-dev; but not (yet?) enough on
the details. Consensus is culturally important, even though in the end
Guido decides: we are keen to ensure we all keep using the same language,
rather than ever fragmenting it into incompatible dialects.
The point is that the language spec itself is changed (along with the
interpreter in C!) to add that statement. I would be happier if I could
write syntax extensions myself, in Python, and if those extensions worked
on CPython, Jython, Python.Net, Spy, etc.
Some people use Python's hooks to create little languages inside Python
(eg. to change the meaning of instantiation), which are not free of
[invariably spelt as 'self', not 'this', but that's another issue]
this.rest = args
this.keys = kwargs
count[0] = count[0] + 1
return count[0]
obj.object_id = id
return obj
def obj_id(obj): return obj.object_id
tag_obj(object.__new__(type), new_obj_id())))
...
# forgot to check for this case...
print Object(foo="bar")
It's not an issue of "checking": you have written (in very obscure
and unreadable fashion) a callable which you want to accept (and
ignore) keyword arguments, but have coded it in such a way that it
in fact refuses keyword arguments. Just add the **kwds after the
you might forget to specify arguments which you do want your callable
to accept and ignore in a wide variety of other contexts, too.
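
Spelled out -- reusing Object, tag_obj and new_obj_id from the quoted
code -- the repair Alex points at would presumably read:

type.__setattr__(Object, "__new__",
    staticmethod(lambda type, *args, **kwds:           # accepts keywords...
        tag_obj(object.__new__(type), new_obj_id())))  # ...and ignores them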
I think changing the meaning of __new__ is a pretty big language
modification...

- Daniel
Jock Cooper
2003-10-07 01:02:16 UTC
Post by Erann Gat
In article <eppstein-9700A3.10461306102003 at news.service.uci.edu>, David
In article
<my-first-name.my-last-name-0610030955090001 at k-137-79-50-101.jpl.nasa.gov>,
: (with-collector collect
:   (do-file-lines (l some-file-name)
:     (if (some-property l) (collect l))))
: This returns a list of all the lines in a file that have some property.
OK, that's _definitely_ just a filter: filter someproperty somefilename
Perhaps throw in a fold if you are trying to abstract "collect".
The net effect is a filter, but again, you need to stop thinking about the
"what" and start thinking about the "how", otherwise, as I said, there's
no reason to use anything other than machine language.
Answer 1: literal translation into Python. The closest analogue of
with-collector etc would be Python's simple generators (yield keyword)
yield l
You left out the with-collector part.
But it's true that my examples are less convincing given the existence of
yield (which I had forgotten about). But the point is that in pre-yield
Python you were stuck until the language designers got around to adding
it.
I'll try to come up with a more convincing short example if I find some
free time today.
I'm afraid it's very hard to give any convincing examples of the
utility of macros -- as long as you are sticking to trivial examples.
On the other hand, you can't exactly post real world complex examples
of how macros saved you time and LOC (we all have 'em) because readers'
eyes would just glaze over. I think macros are just another one of
CL's features that most people just don't get until they actually
use them. But here's a small one:

I wrote about 60 lines worth of macro based code (including a few reader
macros) that allows me to write things like:

(with-dbconnection
  (sql-loop-in-rows
   "select col1, col2 from somewhere where something"
   :my-package row-var "pfx"
   ...
   ...some code...
   ...))

In the "some code" section, the result columns' values are accessed by
#!pfx-colname (eg #!pfx-col1), or directly from row-var using
#?pfx-colname (which returns the position). Also, error handling code
can be automatically included by the macro code. How much time and
effort (and possible bugs) has this saved me? Well, at least 60+ lines
of boilerplate every time I use this pattern. Plus the expansions
for #!colname include error checks/warnings etc. -- all hidden from view.

Jock
---
www.fractal-recursions.com
Dave Benjamin
2003-10-09 00:54:14 UTC
Yeah, wasn't something like that up on ASPN? That's an interesting
trick... are you sure it's not supposed to be "property(*aprop())"
though? (who's being pedantic now? =)
Hi.
The idiom/recipe is at
http://aspn.activestate.com/ASPN/Cookbook/Python/Recipe/205183
Thanks, Sean.
Daniel P. M. Silva
2003-10-08 07:11:58 UTC
Haven't some people implemented an entire class system as one huge macro?
YES! Been there, done that -- about 3 or 4 times, actually.
I went through a bit of a phase where writing OO implementations
for Scheme was one of my principal hobbies. :-)
Nice! I was alluding to MzScheme's class.ss but I guess that's a fun hobby
to have. :) Do you have your class systems available anywhere to download?
I would be especially interested in them if they allow multiple
inheritance, run-time pillaging of class contracts, and explicit "this"
arguments to methods...
By the way, Scheme was my Favourite Cool Language for quite
a while. Then I discovered Python, and while I still appreciate
all the things about Scheme that I appreciated then, I wouldn't
want to go back to using it on a regular basis now. So it's not
a given that any person who likes Scheme must inevitably dislike
Python!
I do "get" macros, and I appreciate having them available in
languages like Scheme, where they seem to fit naturally. But
I can't say I've missed them in Python, probably because Python
provides enough facilities of its own for constructing kinds of
mini-languages (keyword arguments, operator overloading,
iterators, etc.) to satisfy my needs without having to resort
to macros.
You still can't add new binding constructs or safe parameterizations like a
with_directory form:

with_directory("/tmp", do_something())

Where do_something() would be evaluated with the current directory set to
"/tmp" and the old pwd would be restored afterward (even in the event of an
exception).

Last year -- I think at LL2 -- someone showed how they added some sort of
'using "filename":' form to Python... by hacking the interpreter.
And I do regard macros as something that one "resorts" to, for
all the reasons discussed here, plus another fairly major one
that nobody has mentioned: Unless both the macro system and
the macros written in it are *extremely* well designed, error
reporting in the presence of macros of any complexity tends to
be abysmal, because errors get reported in terms of the expansion
of the macro rather than what the programmer originally wrote.
I've yet to encounter this problem in the standard library included with my
Scheme implementation of choice, but ok.
ALL macro systems of any kind that I have ever used have suffered
from this - cpp, C++ templates, Lisp/Scheme macros, TeX,
you name it. I'd hate to see Python grow the same problems.
Some people use Python's hooks to create little languages inside Python (eg.
to change the meaning of instantiation), which are not free of problems:

class Object(object):
    def __init__(this, *args, **kwargs):
        this.rest = args
        this.keys = kwargs

def new_obj_id(count=[0]):
    count[0] = count[0] + 1
    return count[0]

def tag_obj(obj, id):
    obj.object_id = id
    return obj

def obj_id(obj): return obj.object_id

type.__setattr__(Object, "__new__", staticmethod(lambda type, *args:
    tag_obj(object.__new__(type), new_obj_id())))


Great, now all object instantiations (of our own Object class) will also tag
new objects with an ID:

obj = Object()
print "id: ", obj_id(obj)
print "another id: ", obj_id(Object())

Which gives you 1 and then 2. Hurrah.
Have you caught the bug yet?

# forgot to check for this case...
print Object(foo="bar")

This is of course illegal and I get the following error message:

Traceback (most recent call last):
  File "n.py", line 27, in ?
    print Object(foo="bar").rest
TypeError: <lambda>() got an unexpected keyword argument 'foo'

Hmm...
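One plausible fix, for what it's worth, is to let the lambda accept (and
ignore) keyword arguments as well -- a sketch:

type.__setattr__(Object, "__new__", staticmethod(
    lambda type, *args, **kwargs:
        tag_obj(object.__new__(type), new_obj_id())))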
Daniel P. M. Silva
2003-10-07 01:53:30 UTC
Permalink
[...] But the point is that in pre-yield
Python you were stuck until the language designers got around to adding
it.
I'll try to come up with a more convincing short example if I find some
free time today.
Haven't some people implemented an entire class system as one huge macro?

- Daniel
Greg Ewing (using news.cis.dfn.de)
2003-10-13 02:28:57 UTC
Permalink
It has sometimes been said that Lisp should use first and
rest instead of car and cdr
I used to think something like that would be more logical, too.
Until one day it occurred to me that building lists is only
one possible, albeit common, use for cons cells. A cons cell
is actually a completely general-purpose two-element data
structure, and as such its accessors should have names that
don't come with any preconceived semantic connotations.

From that point of view, "car" and "cdr" are as good
as anything!
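For the Python readers, the same idea in a throwaway sketch -- the pair is
primary, and "list" is only one convention layered on top of it:

def cons(a, d): return (a, d)
def car(p): return p[0]   # "first" only by convention
def cdr(p): return p[1]   # "rest" only by convention

lst = cons(1, cons(2, cons(3, None)))   # a proper list...
tree = cons(cons(1, 2), cons(3, 4))     # ...or just a binary tree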
--
Greg Ewing, Computer Science Dept,
University of Canterbury,
Christchurch, New Zealand
http://www.cosc.canterbury.ac.nz/~greg
Pascal Costanza
2003-10-15 20:22:13 UTC
Permalink
In article <bmgh32$1a32$1 at f1node01.rhrz.uni-bonn.de>,
...
I think that's the essential point here. The advantage of the names
car and cdr is that they _don't_ mean anything specific.
gee, you should read early lisp history ;-). car and cdr ha[d|ve] a
very specific meaning
Yes, but no one (no one at all) refers to that meaning anymore. It's a
historical accident that doesn't really matter anymore when developing code.


Pascal
Stephen Horne
2003-10-13 16:28:58 UTC
Permalink
On Mon, 13 Oct 2003 14:08:17 +0200, Pascal Costanza <costanza at web.de>
On Mon, 13 Oct 2003 15:28:57 +1300, "Greg Ewing (using
Post by Greg Ewing (using news.cis.dfn.de)
From that point of view, "car" and "cdr" are as good
as anything!
"left" and "right" - referring to 'subtrees'?
Note: Why break anyone else's code just because you prefer a different
vocabulary?
I wasn't really suggesting a change to lisp - just asking if they
might be more appropriate names.

Actually, I have been having a nagging doubt about this.

I had a couple of phases when I learned some basic lisp, years ago. A
bit at college in the early nineties, and IIRC a bit when I was still
at school in the mid eighties. This was well before common lisp, I
believe.

Anyway, I'd swear 'car' and 'cdr' were considered backward
compatibility words, with the up-to-date words (of the time) being
'head' and 'tail'.

Maybe these are/were common site library conventions that never made
it into any standard?

This would make some sense. After all, 'head' and 'tail' actually
imply some things that are not always true. Those 'cons' thingies may
be trees rather than lists, and even if they are lists they could be
backwards (most of the items under the 'car' side with only one item
on the 'cdr' side) which is certainly not what I'd expect from 'head'
and 'tail'.
--
Steve Horne

steve at ninereeds dot fsnet dot co dot uk
Björn Lindberg
2003-10-10 14:16:42 UTC
Permalink
If your problems are trivial, I suppose the presumed lower startup
costs of Python may mark it as a good solution medium.
I find no significant difference in startup time between python and
mzscheme.
My preliminary results in this very important benchmark indicate that
python performs on par with the two benchmarked Common Lisps:

200 bjorn at nex:~> time for ((i=0; i<100; i++)); do lisp -noinit -eval '(quit)'; done

real 0m2,24s
user 0m1,36s
sys 0m0,83s
201 bjorn at nex:~> time for ((i=0; i<100; i++)); do lisp -noinit -eval '(quit)'; done

real 0m2,24s
user 0m1,39s
sys 0m0,82s
202 bjorn at nex:~> time for ((i=0; i<100; i++)); do clisp -q -x '(quit)'; done

real 0m2,83s
user 0m1,74s
sys 0m1,03s
203 bjorn at nex:~> time for ((i=0; i<100; i++)); do clisp -q -x '(quit)'; done

real 0m2,79s
user 0m1,67s
sys 0m1,09s
204 bjorn at nex:~> time for ((i=0; i<100; i++)); do python -c exit; done

real 0m2,41s
user 0m1,85s
sys 0m0,52s
205 bjorn at nex:~> time for ((i=0; i<100; i++)); do python -c exit; done

real 0m2,41s
user 0m1,89s
sys 0m0,52s

</sarcasm>


Björn
Jacek Generowicz
2003-10-15 16:04:46 UTC
Permalink
HOFs are not a special Lisp thing. Haskell does them much better,
for example... and so does Python.
the alpha and omega of HOFs is that functions are first class
objects that can be passed and returned.
How do you reconcile these two statements ?

[Hint: functions in Lisps are "first class objects that can be passed
and returned"; how does Python (or Haskell) do this "alpha and omega
of HOFs" "much better" ?]
David Eppstein
2003-10-16 18:43:06 UTC
Permalink
In article <wub512qq.fsf at ccs.neu.edu>, Joe Marshall <jrm at ccs.neu.edu>
Did it occur to you that people maybe use python not so much because they
are
retards but because it's vastly more effective than CL at the tasks they
currently need to perform? Should I send you a few hundred lines of my
python
code so that you can try translating them into CL?
Sounds like an interesting challenge...
For simple use of built-in libraries,
http://online.effbot.org/2003_08_01_archive.htm#troll
looks like a good test case.
--
David Eppstein http://www.ics.uci.edu/~eppstein/
Univ. of California, Irvine, School of Information & Computer Science
Terry Reedy
2003-10-16 14:47:32 UTC
Permalink
Ultimately, all these Python v Lisp arguments boil down to the
Agreed so far...
- Keep it simple
Simple can be powerful ;-)
- Guido knows what's best for me
Hogwash. I don't believe that, and neither does Guido.
- Provide as much power to the programmer as possible, out of the
box

Vroom, vroom, maybe I should take it out for a spin .... at the local
racetrack.
- I know what's best for me, and I want a language that allows me to
invest effort in making difficult things easy for me
This I agree with, and it is why I currently choose Python.

Presenting the choice as 'toady language' versus 'real man language'
is rather biased.

Terry J. Reedy
David Mertz
2003-10-17 03:10:52 UTC
Permalink
|>And you DO NOT NEED lambdas for HOFs!

bokr at oz.net (Bengt Richter) wrote previously:
|there could be ways that you need BOTH named and un-named functions.

Nope, you do not NEED both. It can be convenient or expressive to have
both, but it is certainly not necessary for HOFs or any other
computational purpose. And I have yet to see an example where a
hypothetical loss of unnamed functions would *significantly* worsen
expressiveness.

|a function NEEDS a name in order to call itself recursively

Nope. That's the point of the Y combinator; you don't need a name to do
this (just first class anonymous functions). See, for example:

http://en2.wikipedia.org/wiki/Y_combinator
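A sketch in Python, using the applicative-order variant (sometimes called
the Z combinator) -- no function below ever refers to itself by name:

Y = lambda f: (lambda x: f(lambda *a: x(x)(*a)))(
              lambda x: f(lambda *a: x(x)(*a)))

fact = Y(lambda rec: lambda n: 1 if n < 2 else n * rec(n - 1))
print(fact(5))   # 120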

|OTOH, if you evaluate a def in a namespace where you don't know what
|all the names are, you have a collision risk when you choose a name.
|An un-named function eliminates that risk.

Sure, that can occasionally be useful in eliminating the small risk.
But so can 'grep'. There are always more names out there to use. This
particular convenience is VERY small.

|Why should the following kind of thing be arbitrarily restricted?
| >>> funlist = [
| ... (lambda value:
| ... lambda:'My value is %s'%value
| ... # imagine freedom to have full suites here
| ... )(y) for y in range(5)
| ... ]
| >>> for fun in funlist: print fun()

Obviously, you cannot spell 'funlist' quite that way in Python. But the
equivalent is straightforward:
>>> def ValFactory(x):
...     def say_val(x=x): return 'My value is %s' % x
...     return say_val
...
>>> funlist = map(ValFactory, range(5))
I'm not sure what the point is here. My spelling happens to be Python's (or at
least one option in Python)... and it works fine without any lambdas.
If you want, you can even 'del' the name 'ValFactory' after the list is
created.

Yours, David...

--
_/_/_/ THIS MESSAGE WAS BROUGHT TO YOU BY: Postmodern Enterprises _/_/_/
_/_/ ~~~~~~~~~~~~~~~~~~~~[mertz at gnosis.cx]~~~~~~~~~~~~~~~~~~~~~ _/_/
_/_/ The opinions expressed here must be those of my employer... _/_/
_/_/_/_/_/_/_/_/_/_/ Surely you don't think that *I* believe them! _/_/
Kenny Tilton
2003-10-12 01:26:39 UTC
Permalink
...
The very 'feature' that was touted by Erann Gat as macros' killer
advantage in the WITH-CONDITION-MAINTAINED example he posted is the
crucial difference: functions (HO or not) and classes only group some
existing code and data; macros can generate new code based on examining,
and presumably to some level *understanding*, a LOT of very deep things
about the code arguments they're given.
Stop, you're scaring me. You mean to say there are macros out there whose
output/behavior I cannot predict? And I am using them in a context where
I need to know what the behavior will be? What is wrong with me? And
what sort of non-deterministic macros are these, that go out and draw
their own conclusions about what I meant in some way not documented?
Let's start with that WITH-CONDITION-MAINTAINED example of Gat. Remember
it?
No, and Google is not as great as we think it is. :( I did after
extraordinary effort (on this my second try) find the original, but that
was just an application of the macro, not its innards, and I did not get
enough from his description to make out what it was all about. Worse, I
could not find your follow-up objections. I had stopped following this
thread to get some work done (and because I think the horse is dead).

All I know is that you are trying to round up a lynch mob to string up
WITH-MAINTAINED-CONDITION, and that Lisp the language is doomed to
eternal damnation if we on c.l.l. do not denounce it. :)

No, seriously, what is your problem? That the macro would walk the code
of the condition to generate a demon that would not only test the
condition but also do things to maintain the condition, based on its
parsing of the code for the condition?

You got a problem with that? Anyway, this is good, I was going to say
this chit chat would be better if we had some actual macros to fight over.

[Apologies up front: I am guessing left and right at both the macro and
your objections. And ILC2003 starts tomorrow, so I may not get back to
you all for a while.]

kenny

ps. Don't forget to read Paul Graham's Chapters 1 & 8 in On Lisp, from
now on I think it is pointless not to be debating what he said, vs what
we are saying. The whole book is almost dedicated to macros. From the
preface:

"The title [On Lisp] is intended to stress the importance of bottom-up
programming in Lisp. Instead of just writing your program in Lisp, you
can write your own language on Lisp, and write your program in that.

"It is possible to write programs bottom-up in any language, but Lisp is
the most natural vehicle for this style of programming. In Lisp,
bottom-up design is not a special technique reserved for unusually large
or difficult programs. Any substantial program will be written partly in
this style. Lisp was meant from the start to be an extensible language.
The language itself is mostly a collection of Lisp functions, no
different from the ones you define yourself. What's more, Lisp functions
can be expressed as lists, which are Lisp data structures. This means
you can write Lisp functions which generate Lisp code.

"A good Lisp programmer must know how to take advantage of this
possibility. The usual way to do so is by defining a kind of operator
called a macro. Mastering macros is one of the most important steps in
moving from writing correct Lisp programs to writing beautiful ones.
Introductory Lisp books have room for no more than a quick overview of
macros: an explanation of what macros are, together with a few examples
which hint at the strange and wonderful things you can do with them.

"Those strange and wonderful things will receive special attention here.
One of the aims of this book is to collect in one place all that people
have till now had to learn from experience about macros."

Alex, have you read On Lisp?
--
http://tilton-technology.com
What?! You are a newbie and you haven't answered my:
http://alu.cliki.net/The%20Road%20to%20Lisp%20Survey
Alex Martelli
2003-10-10 17:33:04 UTC
Permalink
Kenny Tilton wrote:
...
But methinks a number of folks using Emacs Elisp and Autocad's embedded
Lisp are non-professionals.
Methinks there are a great many more people using the VBA
interface to AutoCAD than its Lisp interface. In fact, my friends
(ex-Autodesk) told me that's the case.
Sheesh, who hasn't been exposed to basic? From my generation, that is.
:) But no matter, the point is anyone can handle parens if they try for
more than an hour.
Yes, but will that make them most happy or productive? The Autocad
case appears to be relevant, though obviously only Autodesk knows
for sure. When I was working in the mechanical CAD field, I had
occasion to speak with many Autocad users -- typically mechanical
drafters, or mechanical or civil engineers, by training and main working
experience -- who HAD painfully (by their tales) learned to "handle
parens", because their work required them occasionally to write Autocad
macros and once upon a time Autolisp was the only practical way to do it --
BUT had jumped ship gleefully to the VBA interface, happily ditching
years of Autolisp experience, just as soon as they possibly could (or
earlier, i.e. when the VBA thingy was very new and still creaky in its
integration with the rest of Autocad -- they'd rather brave the bugs
of the creaky new VBA thingy than stay with the Autolisp devil they
knew). I don't know if syntax was the main determinant. I do know
that quite a few of those people had NOT had any previous exposure to
any kind of Basic -- we're talking about mechanics-junkies, more likely
to spend their spare time hot-rodding their cars at home (Bologna is,
after all, about 20 Km from Ferrari, 20 Km on the other side from
Minardi, while the Ducati motorcycle factory is right here in town,
etc -- *serious* mechanics-freaks country!), rather than playing with
the early home computers, or program for fun.

So, I think Autocad does prove that non-professional programmers
(mechanical designers needing to get their designs done faster) CAN
learn to handle lisp if no alternatives are available -- and also
that they'd rather not do so, if any alternatives are offered. (I
don't know how good a lisp Autolisp is, anyway -- so, as I mentioned,
there may well be NON-syntactical reasons for those guys' dislike
of it despite years of necessarily using it as the only tool with
which they could get their real job done -- but I have no data that
could lead me to rule out syntax as a factor, at least for users
who were OCCASIONAL users anyway, as programming never was their
REAL, MAIN job, just means to an end).
You (Alex?) also worry about groups of programmers and whether what is
good for the gurus will be good for the lesser lights.
If you ever hear me call anyone who is not an expert programmer
a "lesser light" then I give you -- or anyone else here -- permission
to smack me cross-side the head.
Boy, you sure can read a lot into a casually chosen cliche. But can we
clear up once and for all whether these genius scientists are or are not
as good a programmer as you? I thought I heard Python being recommended
as better for non-professional programmers.
Dunno 'bout Andrew, but -- if the scientists (or their employers) are
paying Andrew for programming consultancy, training, and advice, would
it not seem likely that they consider that he's better at those tasks
than they are...? Otherwise why would they bother? Most likely the
scientists are better than him at _other_ intellectual pursuits -- be
it for reasons of nature, nurture, or whatever, need not interest us
here, but it IS a fact that some people are better at some tasks.
There is too much programming to be done, to let ONLY professional
programmers do it -- just like there's too much driving to be done, to
let only professional drivers do it -- still, the professionals can be
expected to be better at their tasks of specialistic expertise.


Alex
Erann Gat
2003-10-13 02:04:09 UTC
Permalink
In article <ue0ib.267972$R32.8718052 at news2.tin.it>, Alex Martelli
Let's start with that WITH-CONDITION-MAINTAINED example of Gat. Remember
it? OK, now, since you don't appear to think it was an idiotic example,
then SHOW me how it takes the code for the condition it is to maintain and
the (obviously very complicated: starting a reactor, operating the reactor,
stopping the reactor -- these three primitives in this sequence) program
over which it is to maintain it, and how does it modify that code to ensure
this purpose. Surely, given its perfectly general name, that macro does not
contain, in itself, any model of the reactor; so it must somehow infer it
(guess it?) from the innards of the code it's analyzing and modifying.
It is not necessary to exhibit a theory of how WITH-CONDITION-MAINTAINED
actually works to understand that if one had such a theory one can package
that theory for use more attractively as a macro than as a function. It
is not impossible to package up this functionality as a function, but it's
very awkward. Control constructs exist in programming languages for a
reason, despite the fact that none of them are really "necessary". For
example, we can dispense with IF statements and replace them with a purely
functional IF construct that takes closures as arguments. Or we can do
things the Java way and create a new Conditional object or some such
thing. But it's more convenient to write an IF statement.
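For concreteness, a sketch of such a purely functional IF in Python -- the
closures keep the untaken branch from being evaluated:

def functional_if(test, then_thunk, else_thunk):
    # dispatch on the truth value without any IF statement
    return {True: then_thunk, False: else_thunk}[bool(test)]()

x = 42
parity = functional_if(x % 2 == 0, lambda: "even", lambda: "odd")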

The claim that macros are useful is nothing more and nothing less than the
claim that the set of useful control constructs is not closed. You can
believe that or not. To me it is self-evidently true, but I don't know
how to convince someone that it's true who doesn't already believe it.
It's rather like arguing over whether the Standard Model of Physics covers
all the useful cases. There's no way to know until someone stumbles
across a useful case that the Standard Model doesn't cover.
For example, the fact that Gat himself says that if what I want to write
are normal applications, macros are not for me: only for those who want
to push the boundaries of the possible are they worthwhile. Do you think
THAT is idiotic, or wise? Please explain either the reason of the drastic
disagreements in your camp, or why most of you do keep trying pushing
macros (and lisp in general) at those of us who are NOT particularly
interested in "living on the edge" and running big risks for their own sake,
accordingly to your answer to the preceding question, thanks.
I can't speak for anyone but myself of course, but IMO nothing worthwhile
is free of risks. I also think you overstate the magnitude of the risk.
You paint nightmare scenarios of people "changing the language"
willy-nilly in all sorts of divergent ways, but 1) in practice on a large
project people tend not to do that and 2) Lisp provides mechanisms for
isolating changes to the language and limiting the scope of their effect.
So while the possibility exists that someone will change the language in a
radical way, in practice this is not really a large risk. The risk of
memory corruption in C is vastly larger than the risk of "language
corruption" in Lisp, and most people seem to take that in stride.
...and there's another who has just answered in the EXACTLY opposite
way -- that OF COURSE macros can do more than HOF's. So, collectively
speaking, you guys don't even KNOW whether those macros you love so
much are really necessary to do other things than non-macro HOFs allow
(qualification inserted to try to divert the silly objection, already made
by others on your side, that macros _are_ functions), or just pretty things
up a little bit.
But all any high level language does is "pretty things up a bit". There's
nothing you can do in any language that can't be done in machine
language. "Prettying things up a bit" is the whole point. Denigrating
"prettying things up a bit" is like denigrating cars because you can get
from here to there just as well by walking, and all the car does is "speed
things up a bit".

E.
Hans Nowak
2003-10-12 00:42:57 UTC
Permalink
On Wed, 08 Oct 2003 18:28:36 +1300, "Greg Ewing (using news.cis.dfn.de)"
Hmm, if I recall correctly, in Latin the plural of 'virus' is 'virus'.
Actually, the last discussion of this that I saw (can't remember where)
came to the conclusion that the word 'virus' didn't *have* a plural
in Latin at all, because its original meaning didn't refer to something
countable.
'virus' (slime, poison, venom) is a 2nd declension neuter noun and
technically does have a plural 'viri'.
Doesn't it belong to the group that includes 'fructus'? Of course this has
nothing to do with the plural used in English, but still... :-)

This page, which has a lot of info on this issue, seems to think so:

http://www.perl.com/language/misc/virus.html
--
Hans (hans at zephyrfalcon.org)
http://zephyrfalcon.org/
David Rush
2003-10-06 20:49:08 UTC
Permalink
Guido's generally adamant stance for simplicity has been the
key determinant in the evolution of Python.
Simplicity is good. I'm just finding it harder to believe that Guido's
perception of simplicity is accurate.
Anybody who doesn't value simplicity and uniformity is quite
unlikely to be comfortable with Python
I would say that one of the reasons why I program in Scheme is *because* I
value simplicity and uniformity. The way that Python has been described in
this discussion make me think that I would really
*hate* Python for its unnecessary complications if I went back to it.
And I have spent years admiring Python from afar. The only reason I
didn't adopt it years ago was that it was lagging behind the releases
of Tk which I needed for my cross-platform aspirations. At the time, I
actually enjoyed programming in Python as a cheaper form of Smalltalk
(literally, Unix Smalltalk environments were going for $4000/seat).

Probably the most I can say now is that I think that Python's syntax is
unecessarily reviled (and there are a *lot* of people who think that
Python's syntax is *horrible* - I am not one of them mind you), in
much the same way that s-expressions are a stumbling block for programmers
from infix-punctuation language communities.

david rush
--
(\x.(x x) \x.(x x)) -> (s i i (s i i))
-- aki helin (on comp.lang.scheme)
Alex Martelli
2003-10-10 09:12:12 UTC
Permalink
Pick the one Common Lisp implementation that provides the stuff you
need. If no Common Lisp implementation provides all the stuff you
need, write your own libraries or pick a different language. It's as
simple as that.
Coming from a C/C++ background, I'm surprised by this attitude. Is
portability of code across different language implementations not a
priority for LISP programmers?
Libraries distributed as binaries are not portable across different C++
implementations on the same machine (as a rule).
This isn't true anymore (i.e., for newer compilers).
Wow, that IS great news! Does it apply to 32-bit Intel-oid machines (the
most widespread architecture) and the newest releases of MS VC++ (7.1)
and gcc, the most widespread compilers for it? I can't find any docs on what
switches or whatever I should give the two compilers to get seamless interop.

Specifically, the standard Python on Windows has long been built with MSVC++
and this has given problems to C-coded extension writers who don't own that
product -- it IS possible to use other compilers to build the extensions, but
only with much pain and some limitations (e.g on FILE* arguments). If this
has now gone away there would be much rejoicing -- with proper docs on the
Python side of things and/or use of whatever switches are needed to enable
this, if any, when we do the standard Python build on Windows.
Mangling, exception handling, etc, is all covered by the ABI.
IBM's XLC 6.0 for OSX also follows this C++ ABI, and is thus compatible
with G++ 3.x on OSX.
I'm not very familiar with Python on the Mac but I think it uses another
commercial compiler (perhaps Metrowerks?), so I suspect the same question
may apply here. It's not as crucial on other architectures where Python is
more normally built with free compilers, but it sure WOULD still be nice to
think of possible use of costly commercial compilers with hypothetically
great optimizations for the distribution of some "hotspot" object files, if
that sped the interpreter up without giving any interoperability problems.


Alex
Vis Mike
2003-10-17 19:48:47 UTC
Permalink
"Luke Gorrie" <luke at bluetail.com> wrote in message
Anonymous functions *can* be more clear than any name. Either because
they are short and simple, because it is hard to come up with a good
name, and/or becuase they are ambigous.
Say I want to attach an index to elements of a list. I could write
integers = [1..]
attach_index ls = zip integers ls
or just
attach_index ls = zip [1..] ls
If we're arguing to eliminate names that don't say very much, then
attach_index = zip [1..]
I think dynamic scoping within blocks really trims down on duplication and
makes the code easier to read. For example:

employees sort: [ | a b | a date < b date ]

A lot of typing for a simple concept:

employees sort: [ < date ]

I'm not against extra typing that makes things clear, but rather against
extra typing that makes the concept unclear.
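The rough Python analogue, for comparison (a sketch with a made-up
Employee class):

from operator import attrgetter

class Employee:
    def __init__(self, date):
        self.date = date

employees = [Employee(3), Employee(1), Employee(2)]
employees.sort(key=lambda e: e.date)     # explicit anonymous key function
employees.sort(key=attrgetter("date"))   # terser: just name the attribute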

-- Mike
Whether you want to give an explicit name to the list of integers is
not given. If the indexed list is local, it is better to use the
definition directly; I don't want to look up the definition of
integers (perhaps in a different module) to check whether it is [1..]
or [0..].
And for the exact same reason you might like to just write "zip [1..]"
instead of using a separate "attach_index" function.
Cheers,
Luke
Raffael Cavallaro
2003-10-16 02:50:28 UTC
Permalink
In article <9d140b81.0310151301.4b811dfa at posting.google.com>,
The reason people are attacking your posts is because the above has
NOTHING to do with anonymous functions. This advice should be
followed independent of anonymous functions.
You're misreading those who have argued against me. They seem to think
that this advice should _not_ be followed in the case of anonymous
functions. I.e., the anonymous function camp seem to think that
anonymous functions are such a wonderfully expressive tool that they are
more clear than _actual descriptive function names_.

I agree with you; this advice should be followed, period (well, it is
_my_ advice, after all). But advocates of a particular functional style
think it is perfectly alright to use the same unnamed functional idiom
over and over throughout source code, because functional abstractions
are so wonderfully expressive. They think it is just fine to expose the
implementation details in code locations where it is completely
unnecessary to know the implementation specifics.

How else can this be construed?

In article <pan.2003.10.14.23.19.11.733879 at knm.org.pl>,
A name is an extra level of indirection. You must follow it to be
100% sure what the function means, or to understand what it really
means that it does what it's named after. The code also gets longer - not
only more verbose but the structure of the code gets more complex with
more interdependent parts. When you have lots of short functions, it's
harder to find them. There are many names to invent for the writer and
many names to remember for a reader. Function headers are administrative
stuff, it's harder to find real code among abstractions being introduced
and used.
In other words, don't use names _at all_ if you can avoid them. Just
long, rambling tracts of code filled with anonymous functions. After
all, you'd just have to go look at the named function bodies anyway, just
to be _sure_ they did what they said they were doing. (I wonder if he
disassembles OS calls too, you know, just to be sure).

Really I personally think they are often just enamored of their own
functional l33tness, but you'll always have a hard time convincing
programmers that their unstated preference is to write something so
dense that only they and others equally gifted in code decipherment can
grasp it. Many programmers take it badly when you tell them that their
job should be much more about communicating intent to other human beings
than about being extremely clever. As a result of this clever agenda, an
unmaintainably large proportion of the code that has ever been written
is way too clever for its own good.
Raffael Cavallaro
2003-10-15 00:41:28 UTC
Permalink
In article <pan.2003.10.14.23.19.11.733879 at knm.org.pl>,
Sometimes a function is so simple that its body is more clear than any
name. A name is an extra level of indirection. You must follow it to be
100% sure what the function means, or to understand what it really
means that it does what it's named after.
Your argument is based on the assumption that whenever people express
_what_ a function does, they do so badly, with an inappropriate name.

We should choose our mode of expression based on how things work when
used correctly, not based on what might happen when used foolishly. We
don't write novels based on how they might be misread by the
semi-literate.

Anonymous functions add no clarity except to our understanding of
_implementation_, i.e., _how_ not _what_. Higher level abstractions
should express _what_. Implementation details should remain separate,
both for clarity of exposition, and for maintenance and change of
implementation.
The code also gets longer
No, it gets shorter, because you don't repeat your use of the same
abstraction over and over. You define it once, then reference it by name
everywhere you use it.
- not
only more verbose but the structure of the code gets more complex with
more interdependent parts.
No, there are _fewer_ interdependent parts, because the parts that
correspond to the anonymous function bodies are _completely gone_ from
the main exposition of what is happening. These formerly anonymous
function bodies are now elsewhere, where they will only be consulted when
it is necessary to modify them.

You seem to take the view that client code can't trust the interfaces it
uses, that you have to see how things are implemented to make sure they
do what they purport to do.

This is a very counterproductive attitude. Code should provide high
level abstractions, and clients of this code should be able to tell what
it does just by looking at the interface, and maybe a line of
documentation. It shouldn't be necessary to understand the internals of
code one uses just to use it. And it certainly isn't necessary to
include the internals of code one uses where one is using it (i.e.,
anonymous functions). That's what named functions and macros are for.
Inlining should be done by compilers, not programmers.

Anonymous functions are a form of unnecessary information overload. If I
don't need to see how something works right here, in this particular
context, then don't put its implementation here. Just refer to it by
name.
When you have lots of short functions, it's
harder to find them. There are many names to invent for the writer and
many names to remember for a reader.
Which is why names should be descriptive. Then, there's little to
remember. I don't need to remember what add-offset does, nor look up
it's definition, to understand its use in some client code. Anonymous
functions are sometimes used as a crutch by those who can't be bothered
to take the time to attend to the social and communicative aspects of
programming. Ditto for overly long function bodies. How to express
intent to human readers is just as important as getting the compiler to
do what you want. These same programmers seem enamored of
crack-smokingly short and cryptic identifier and function names, as if
typing 10 characters instead of 3 is the real bottleneck in modern
software development. (Don't these people know how to touch type?)
Function headers are administrative
stuff, it's harder to find real code among abstractions being introduced
and used.
Seemingly to you, the only "real code" is low level implementation. In
any non-trivial software, however, the "real code" is the interplay of
high level abstractions. At each level, we craft a _what_ from some less
abstract _how_, and the _what_ we have just defined, is, in turn used as
part of the _how_ for an even higher level of abstraction or
functionality.
Why do you insist on naming *functions*? You could equally well say that
every list should be named, so you would see its purpose rather than its
contents.
I think this concept is called variable bindings ;^)
Perhaps every number should be named, so you can see what it
represents rather than its value.
Actually, they are _already_ named. The numerals we use _are_ names, not
numbers themselves. I'm surprised you aren't advocating the use of
Church Numerals for all numerical calculation.
You could say that each statement of
a compound statement should be moved to a separate function, so you can
see what it does by its name, not how it does it by its contents. It's
all equally absurd.
In the Smalltalk community the rule of thumb is that if a method body
gets to be more than a few lines, you've failed to break it down into
smaller abstractions (i.e., methods).
A program should balance named and unnamed objects. Both are useful,
there is a continuum between cases where one or the other is more clear
and it's subjective in border cases, but there is a place for unnamed
functions - they are not that special. Most high level languages have
anonymous functions for a reason.
Yes, but they should live inside the bodies of named functions. Not lie
exposed in the middle of higher level abstractions. Please also see my
reply to Joe Marshall/Prunesquallor a few posts up in this thread.
Raffael Cavallaro
2003-10-17 12:03:49 UTC
Permalink
In article <3f8e1312 at news.sentex.net>,
map is an abstraction that
specifies that you want to apply a certain operation to each element
of a collection, with adding the offset being the desired operation.
Sorry to reply so late - I didn't see this post.

In the context in which the offsets are added, it isn't necessary to
know that the offsets are added using map, as opposed, for example, to
an iterative construct, such as loop. Since we don't need to know _how_
the adding of offsets is done at every location in the source where we
might want to add an offset to the elements of a list or vector, we
should _name_ this bit of functionality. This is an even bigger win
should we ever decide to switch from map to loop, or do, or dolist, etc.
Then we write a single change, not one for every time we use this bit of
functionality.
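A sketch of the same point in Python, with a hypothetical add_offset as
the named abstraction -- the name is the interface, and what stands behind
it is free to change:

def add_offset(offset, items):
    # clients depend only on the name; whether this uses map, a loop,
    # or a comprehension is a private implementation detail
    return [x + offset for x in items]

print(add_offset(10, [1, 2, 3]))   # [11, 12, 13]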
Brian McNamara!
2003-10-15 02:52:41 UTC
Permalink
Post by Raffael Cavallaro
Sometimes a function is so simple that its body is more clear than any
name. A name is an extra level of indirection. You must follow it to be
100% sure what the function means, or to understand what it really
means that it does what it's named after.
Your argument is based on the assumption that whenever people express
_what_ a function does, they do so badly, with an inappropriate name.
...
Post by Raffael Cavallaro
Anonymous functions add no clarity except to our understaning of
_implementation_, i.e., _how_ not _what_. Higher level abstractions
should express _what_. Implementation details should remain separate,
both for clarity of exposition, and for maintanence and change of
implementation.
I am in total agreement with Marcin. What you (Raffael) say here
sounds simply like dogma, rather than practical advice for constructing
software.

As a short practical example of what I'm saying, consider code like

-- I am translating this from another language; apologies if I have
-- screwed up any of the Haskell details
nat :: Parser Int
nat = (do { c <- digit; return (charToInt c) })
      `chainl1` (return (\x y -> 10*x + y))

The code here parses natural numbers; it does so by parsing individual
digit characters, translating the digit characters into the
corresponding integers, and then combining the digit-integers into the
overall number.

If I understand you correctly, you would insist I write something like

nat :: Parser Int
nat = (do { c <- digit; return (charToInt c) })
      `chainl1` combineDigits
  where combineDigits = return (\x y -> 10*x + y)

In my opinion, the code here is worse than the original.
"combineDigits" (or whatever we might choose to name it) is a one-time
function. It is not going to be reused (it's local to "nat"; moving it
to a more-top-level scope would just cause more problems finding a good
name for it).

Introducing a name for this function does nothing more than clutter the
code. The name provides no extra understanding. Of course the
function is "combining digits"! If the programmer is using chainl1, he
already knows that

chainl1 :: Parser a -> Parser (a->a->a) -> Parser a
-- parses a series of items separated by left-associative operators
-- which are used to combine those items

and he can infer "combining digits" just by seeing the call to chainl1
and inspecting its left argument.
Post by Raffael Cavallaro
Anonymous functions are a form of unnecessary information overload. If I
don't need to see how something works right here, in this particular
context, then don't put its implementation here. Just refer to it by
name.
There are more functional abstractions than there are reasonable names.

Forcing programmers to name every abstraction they'll ever encounter is
tantamount to forcing them to memorize a huge vocabulary of names for
these abstractions. Perhaps you can remember the precise meanings of
ten-thousand function names off the top of your head. I, on the other
hand, can probably recall only a hundred or so. Thus, I must write,
e.g.

zipWith (\x y -> (x+y,x-y)) list1 list2

rather than

zipWith makePairOfSumAndDifference list1 list2

By only naming the most reusable abstractions (and, ideally, selecting
a set which are mostly orthogonal to one another), we provide a "core
vocabulary" which captures the essential basis of the domain. Lambda
then takes us the rest of the way. In my opinion, a core vocabulary of
named functions plus lambda is better than a separate name for every
abstraction. In natural language, such a scheme would be considered
double-plus-un-good, but in programming, I think it lends itself to the
simplest and most precise specifications.


I agree with your one of your overall theses, which is that we should
focus on the "what" rather than the "how". Where our opinions diverge,
however, is that I think sometimes the best way to communicate an
abstraction is to show the "how" inline, rather than creating a new
name in an attempt to capture the "what".
--
Brian M. McNamara lorgon at acm.org : I am a parsing fool!
** Reduce - Reuse - Recycle ** : (Where's my medication? ;) )
Jon S. Anthony
2003-10-13 16:48:18 UTC
Permalink
At the moment the only thing I am willing to denounce as idiotic are
your clueless rants.
Excellent! I interpret the old saying "you can judge a man by the quality
of his enemies" differently than most do: I'm _overjoyed_ that my enemies
are the scum of the earth, and you, sir [to use the word loosely], look as
if you're fully qualified to join that self-selected company.
Whatever.

/Jon
Andrew Dalke
2003-10-11 17:27:02 UTC
Permalink
What do I want of the OS running my firewall? Security, security,
security.
It's nice if it can run on very scant resources, offers solid and usable
packet filtering, and has good, secure device drivers available for the
kind of devices I may want in a computer dedicated to firewalling -- all
sorts of ethernet cards, wifi thingies, pppoe and plain old ppp on ISDN
for
emergency fallback if cable service goes down, UPS boxes, a serial console
of course, perhaps various storage devices, and that's about it.
I see no reason why I should use anything but OpenBSD for that.
I'm running a Linksys box as my primary firewall. I like the feeling of
security that the OS is in firmware and can't be updated (I hope) except
through the USB connector. I like that the box is portable (I've taken
it to a couple of conferences), low power (I leave it on all the time), and
quiet.

I do have a network -> dialup box as well, from when I needed to do dialup
to one of my clients, but I've not needed that feature as a backup
to my DSL in over a year.
(DHCP, DNS, ntp, Squid for proxying, ...).
It does DHCP but not the rest. Would be nice, but I would prefer
those features to be on this side of my firewall. Yes, I know about
OpenBSD's "only one remote hold in the default install, in more
than 7 years" claim to fame.

Andrew
dalke at dalkescientific.com

But is it a sense of security or real security? Hmm.... :)
Alex Martelli
2003-10-12 18:34:39 UTC
Permalink
Pascal Costanza wrote:
...
Does Python allow local function definitions?
...
Can they shadow predefined functions?
...
Yes, named objects, including functions, can (locally) shadow
(override) builtins. It is considered a bad habit/practice unless
done intentionally with a functional reason.
Well, this proves that Python has a language feature that is as
dangerous as many people seem to think macros are.
Indeed, a chorus of "don't do that" is the typical comment each
and every time a newbie falls into that particular mis-use. Currently,
the --shadow option of PyChecker only warns about shadowing of
_variables_, not shadowing of _functions_, but there's really no
reason why it shouldn't warn about both. Logilab's pylint does
diagnose "redefining built-in" with a warning (I think they mean
_shadowing_, not actually _redefining_, but this may be an issue
of preferred usage of terms).

"Nailing down" built-ins (at first with a built-in warning for overriding
them, later in stronger ways -- slowly and gradually, like always, to
maintain backwards compatibility and allow slow, gradual migration of the
large existing codebase) is under active consideration for the next version
of Python, expected (roughly -- no firm plans yet) in early 2005.

So, yes, Python is not perfect today (or else, we wouldn't be planning a
2.4 release...:-). While it never went out of its way to give the user "as
much rope as needed to shoot oneself in the foot", neither did it ever
spend enormous energy in trying to help the user avoid many possible errors
and dubious usage. Such tools as PyChecker and pylint are a start, and
some of their functionality should eventually be folded back into the
core, just as tabnanny's was in the past with the -t switch. I don't think
the fundamental Python will ever nag you for missing comments or
docstrings, too-short names, etc, the way pylint does by default (at
least, I sure hope not...!-), but there's quite a bit I _would_ like to have
it do in terms of warnings and, eventually, error messages for
"feechurs" that only exist because it was once simple to allow than
to forbid them, not by a deliberate design decision to have them there.

Note that SOME built-ins exist SPECIFICALLY for the purpose of
letting you override them. Consider, for example, __import__ -- this
built-in function just exposes the inner mechanics of the import
statement (and friends) to let you get modules from some other
place (e.g., when your program must run off a relational database
rather than off a filesystem). In other words, it's a rudimentary hook
in a "Template Method" design pattern (it's also occasionally handy
to let you import a module whose name is in a string, without
going to the bother of an 'exec', so it will surely stay for that purpose
even though we now have a shiny brand-new architecture for
import hooks -- but that's another story). Having a single hook of
global effect has all the usual downsides, of course (which is exactly
why we DO have that new architecture;-): two or more complicated
packages doing import-hooks can't necessarily coexist within the
same Python application program (the only saving grace which let
us live with that simplistic hook for so many years is that importing
from strange places is typically a need of a certain deployment of
an overall application, not of a package -- still, such packages DO
exist, so the previous solution was far from perfect).
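The string-name convenience mentioned above, in a minimal sketch:

modname = "math"            # module name known only at run time
mod = __import__(modname)   # no 'exec' needed
print(mod.sqrt(2))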

Anyway, back to your contention: I do not think that the fact that
the user can, within his functions, choose very debatable names,
such as those which shadow built-ins, is anywhere as powerful,
and therefore as dangerous, as macros. My own functions using
'sum' will get the built-in one even if yours do weird things with
that same name as a local variable of their own. The downsides
of shadowing are essentially as follows...

a newbie posts some fragment of his code asking for guidance,
and among other things that fragment has
for i in range(len(thenumbers)):
    total = total + thenumbers[i]
he will receive many suggestions on how to make it better,
including the ideal one:
total = sum(thenumbers, total)
But then he tries it out and reports "it breaks" (newbies rarely
are clueful enough to just copy and paste error messages). And
we all waste lots of time finding out that this is because... the
hapless newbie had named HIS OWN FUNCTION 'sum', so
this was causing runaway recursion. Having met similar issues
over and over, one starts to warn newbies against shadowing
and get sympathetic with the idea of forbidding it:-).
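A minimal reconstruction of that accident (the newbie's hypothetical
definition):

def sum(thenumbers, total):
    # shadows the builtin 'sum', so the suggested one-liner now
    # calls this very function again: runaway recursion
    return sum(thenumbers, total)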

That doesn't really compare to an extra feature in the language
that is deliberately designed to let reasonably-clueful users do
their thing, isn't deprecated nor warned against by anybody at
all (with a few isolated voices speaking about "abuse" of macros
in this thread, but still with an appreciation for macros when
_well_ used), and is MEANT to do what newbies _accidentally_
do with shadowing & much more besides;-).


Alex
Alex Martelli
2003-10-12 21:59:26 UTC
Permalink
Pascal Costanza wrote:
...
Post by Alex Martelli
Well, this proves that Python has a language feature that is as
dangerous as many people seem to think macros are.
...
Post by Alex Martelli
Indeed, a chorus of "don't do that" is the typical comment each
and every time a newbie falls into that particular mis-use. Currently,
...
Post by Alex Martelli
large existing codebase) is under active consideration for the next
version of Python, expected (roughly -- no firm plans yet) in early 2005.
OK, I understand that the Python mindset is really _a lot_ different
than the Lisp mindset in this regard.
As in, no lisper will ever admit that a currently existing feature is
considered a misfeature?-)
Ah, you want something like final methods in Java, or better probably
final implicitly as the default and means to make select methods
non-final, right?
Not really, the issue I was discussing was specifically with importing.

Normally, an import statement "looks" for a module [a] among those
already loaded, [b] among the ones built-in to the runtime, [c] on
the filesystem (files in directories listed in sys.path). "import hooks"
can be used to let you get modules from other places yet (a database,
a server over the network, an encrypted version, ...). The new architecture
I mentioned lets many import hooks coexist and cooperate, while the
old single-hook architecture made that MUCH more difficult, that's all.

"final implicitly as the default and means to make select methods
non-final" is roughly what C++ has -- the "means" being the "virtual"
attribute of methods. Experience proves that's not what we want.
Rather, builtin (free, aka toplevel) _names_ should be locked down
just as today names of _attributes_ of builtin types are mostly
locked down (with specific, deliberate exceptions, yes). But I think
I'm in a minority in wanting similar mechanisms for non-built-ins,
akin to the 'freeze' mechanism of Ruby (and I'm dismayed by reading
that experienced Rubystas say that freeze LOOKS like a cool idea
but in practice it's almost never useful -- they have the relevant
experience, I don't, so I have to respect their evaluation).
What makes you think that macros have farther reaching effects in this
regard than functions? If I call a method and pass it a function object,
I also don't know what the method will do with it.
Of course not -- but it *cannot possibly* do what Gat's example of macros,
WITH-MAINTAINED-CONDITION, is _claimed_ to do... "reason" about the
condition it's meant to maintain (in his example a constraint on a variable
named temperature), about the code over which it is to be maintained
(three functions, or macros, that start, run, and stop the reactor),
presumably infer from that code a model of how a reactor _works_, and
rewrite the control code accordingly to ensure the condition _is_ in fact
being maintained. A callable passed as a parameter is _atomic_ -- you
call it zero or more times with arguments, and/or you store it somewhere
for later calling, *THAT'S IT*. This is _trivially simple_ to document and
reason about, compared to something that has the potential to dissect
and alter the code it's passed to generate completely new one, most
particularly when there are also implicit models of the physical world being
inferred and reasoned about. Given that I've seen nobody say, for days!,
that Gat's example was idiotic, as I had first I thought it might be, and
on the contrary I've seen many endorse it, I use it now as the simplest
way to show why macros are obviously claimed by their proponents to
be _scarily_ more powerful than functions. (and if a few voices out of
the many from the macro-lovers camp should suddenly appear to claim
that the example was in fact idiotic, while most others keep concurring
with it, that will scale down "their proponents" to "most of their
proponents", not a major difference after all).
Overriding methods can also be problematic when they break contracts.
That typically only means an exception ends up being raised when
the method is used "inappropriately" - i.e. in ways depending on the
contract the override violates. The only issue is ensuring that the
diagnostics of the error are clear and complete (and giving clear and
complete error diagnostics is often not trivial, but that is common to
just about _any_ classes of errors that programmers do make).
(Are you also considering to add DBC to Python? I would expect that by
now given your reply above.)
Several different implementations of DBC for Python are around, just
like several different architectures for interfaces (or, as I hope,
Haskell-like typeclasses, a more powerful concept). [Note that the
lack of macros stops nobody from playing around with concepts they
would like to see in Python: they just don't get to make new syntax
to go with them, and, thus, to fragment the language thereby:-)].
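For instance, a bare-bones precondition checker is a few lines of plain
Python -- a sketch with made-up names, not the API of any of those actual
implementations:

def require(pre):
    # attach a precondition to a function; fail early and clearly
    def deco(fn):
        def wrapper(*args, **kwargs):
            assert pre(*args, **kwargs), "precondition violated"
            return fn(*args, **kwargs)
        return wrapper
    return deco

@require(lambda n: n >= 0)
def isqrt(n):
    return int(n ** 0.5)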

Guido has already declared that ONE concept of interfaces (or
typeclasses, or protocols, etc) _will_ eventually get into Python -- but
_which one_, it's far too early to tell. I would be surprised if whichever
version does make it into Python doesn't let you express contracts.
A contract violation will doubtlessly only mean a clear and early error
diagnostic, surely a good thing but not any real change in the
power of the language.
Can you give an example for the presumably dangerous things macros
supposedly can do that you have in mind?
I have given this repeatedly: they can (and in fact do) tempt programmers
using a language which offers macros (various versions of lisp) to, e.g.,
"embed a domain specific language" right into the general purpose language.
I.e., exactly the use which is claimed to be the ADVANTAGE of macros. I
have seen computer scientists with modest grasp of integrated circuit design
embed half-baked hardware-description languages (_at least_ one different
incompatible such sublanguage per lab) right into the general-purpose
language, and tout it at conferences as the holy grail -- while competitors
were designing declarative languages intended specifically for the purpose
of describing hardware, with syntax and specifically limited semantics that
seemed to be designed in concert with the hardware guys who'd later be
USING the gd thing (and were NOT particularly interested in programming
except in as much it made designing hardware faster and cheaper). The
effort of parsing those special-purpose languages was of course trivial (even
at the time -- a quarter of a century ago -- yacc and flex WERE already
around...!), there was no language/metalanguage confusion, specialists in
the domain actually did a large part of the domain-specific language design
(without needing macro smarts for the purpose) and ended up eating our
lunch (well, except that I jumped ship before then...;-).

Without macros, when you see you want to design a special-purpose
language you are motivated to put it OUTSIDE your primary language,
and design it WITH its intended users, FOR its intended purposes, which
may well have nothing at all to do with programming. You parse it with a
parser (trivial these days, trivial a quarter of a century ago), and off you
go. With macros, you're encouraged to do all the wrong things -- or, to
be more precise, encouraged to do just the things I saw causing the
many costly failures (one or more per lab, thanks to the divergence:-)
back in that my early formative experience.

I have no problem with macros _in a special-purpose language_ where
they won't tempt you to embed what you _should_ be "out-bedding",
so to speak -- if the problem of giving clear diagnostics of errors can be
mastered, denoting that some functions are to be called at compile
time to produce code in the SPL has no conceptual, nor, I think,
particular "sociological" problem. It's only an issue of weighing their
costs and usefulness -- does the SPL embody other ways to remove
duplication and encourage refactoring thereby, are there overlap
among various such ways, etc, etc. E.g., a purely declarative SPL,
with the purpose of describing some intricate structure, may have no
'functions' and thus no other real way to remove duplication than
macros (well, it depends on whether there may be other domain
specific abstractions that would be more useful than mere textual
expansions, of course -- e.g. inheritance/specialization, composition
of parts, and the like -- but they need not be optimal to capture some
inevitable "quasi-accidental duplications of SPL code" where a macro
might well be).


Alex
Henrik Motakef
2003-10-12 23:20:41 UTC
Permalink
Post by Alex Martelli
OK, I understand that the Python mindset is really _a lot_ different
than the Lisp mindset in this regard.
As in, no lisper will ever admit that a currently existing feature is
considered a misfeature?-)
You might want to search google groups for threads about "logical
pathnames" in cll :-)
Post by Alex Martelli
Guido has already declared that ONE concept of interfaces (or
typeclasses, or protocols, etc) _will_ eventually get into Python -- but
_which one_, it's far too early to tell.
A propos interfaces in Python: The way they were done in earlier Zope
(with "magic" docstrings IIRC) was one of the things that led me to
believe language extensibilit was a must, together with the phletora
of SPLs the Java community came up with, either in comments (like
JavaDoc and XDoclet) or ad-hoc XML "configuration" files that grow and
grow until they are at least turing-complete at some point. (blech)

People /will/ extend the base language if it's not powerful enough
for everything they want to do, macros or not. You can either give
them a powerful, documented and portable standard way to do so, or
ignore it, hoping that the benevolent dictator will someday change the
core language in a way that blesses one of the extensions (most likely
a polished variant of an existing one) as the "obvious", official one.

It is the difference between implementing a proper type system and
extending lint to check for consistent use of Hungarian notation
at the end of the day.
Erann Gat
2003-10-13 02:14:56 UTC
Permalink
In article <29kib.206536$hE5.6945256 at news1.tin.it>, Alex Martelli
Post by Alex Martelli
What makes you think that macros have farther reaching effects in this
regard than functions? If I call a method and pass it a function object,
I also don't know what the method will do with it.
Of course not -- but it *cannot possibly* do what Gat's example of macros,
WITH-MAINTAINED-CONDITION, is _claimed_ to do... "reason" about the
condition it's meant to maintain (in his example a constraint on a variable
named temperature), about the code over which it is to be maintained
(three functions, or macros, that start, run, and stop the reactor),
presumably infer from that code a model of how a reactor _works_, and
rewrite the control code accordingly to ensure the condition _is_ in fact
being maintained. A callable passed as a parameter is _atomic_ -- you
call it zero or more times with arguments, and/or you store it somewhere
for later calling, *THAT'S IT*. This is _trivially simple_ to document and
reason about, compared to something that has the potential to dissect
and alter the code it's passed to generate completely new code, most
particularly when there are also implicit models of the physical world being
inferred and reasoned about. Given that I've seen nobody say, for days!,
that Gat's example was idiotic, as I had first thought it might be, and
on the contrary I've seen many endorse it, I use it now as the simplest
way to show why macros are obviously claimed by their proponents to
be _scarily_ more powerful than functions.
Why "scarily"?

E.
Ng Pheng Siong
2003-10-08 05:47:18 UTC
Permalink
Well, I would say that kanji is badly designed, compared to the Latin
alphabet. The vowels are composed with consonants (with diacritical
marks) and consonants are written following four or five groups with
additional diacritical marks to distinguish within the groups. It's
more a phonetic code than a true alphabet.
Kanji are ideograms borrowed from Chinese. Kanji literally means "Han
character".

I think the diacritical marks you mention are pronunciation guides, much
like Hanyu Pinyin is a Mandarin pronunciation guide for Chinese.

In Hanyu Pinyin, Kanji (read as a Chinese word phrase) is rendered "han4
zi4".

In Korean, Kanji is pronounced Hanja.

Same two-character word phrase, different pronunciations.
--
Ng Pheng Siong <ngps at netmemetic.com>

http://firewall.rulemaker.net -+- Manage Your Firewall Rulebase Changes
http://sandbox.rulemaker.net/ngps -+- Open Source Python Crypto & SSL
Alexander Schmolck
2003-10-12 17:25:04 UTC
Permalink
The smartest people I know aren't programmers. What does
that say?
I think this is vital point. CL's inaccessibility is painted as a feature of
CL by many c.l.l denizens (keeps the unwashed masses out), but IMO the CL
community stunts and starves itself intellectually big time because CL is (I
strongly suspect) an *extremely* unattractive language for smart people
(unless they happen to be computer geeks).

Apart from the fact that this yields a positive feedback loop, I'd think that
even the smart computer geeks are likely to suffer from this incestuousness in
the medium run.

'as
David Eppstein
2003-10-14 15:33:02 UTC
Permalink
In article <tyf1xtf50a5.fsf at pcepsft001.cern.ch>,
However, please _do_ tell me if you hear of anyone implementing Python
in Lisp[*].
Having Python as a front-end to Lisp[*] (as it is now a front-end to
C, C++ and Java) would be very interesting indeed.
It is now also a front-end to Objective C, via the PyObjC project.
--
David Eppstein http://www.ics.uci.edu/~eppstein/
Univ. of California, Irvine, School of Information & Computer Science
Jon S. Anthony
2003-10-09 20:26:50 UTC
Permalink
Ahhh, so make the language easier for computers to understand and
harder for intelligent users to use? ;)
Spoken like a true Python supporter...

/Jon
Andrew Dalke
2003-10-09 19:52:47 UTC
Permalink
Note that I did not at all make reference to macros. Your statements
to date suggest that your answer to the first is "no."
That's not exactly my position; rather, my position is that just about
anything can and will be abused in some way, shape or fashion. It's a
simple fact of working in teams. However I would rather err on the side
of abstractability and re-usability than on the side of forced
restrictions.

You are correct. I misremembered "Tolton" as "Tilton" and confused
you with someone else. *blush*

My answer, btw, that the macro preprocessor in C is something
which is useful and too easily prone to misuse. Eg, my original
C book was "C for native speakers of Pascal" and included in
the first section a set of macros like

#define BEGIN {
#define END }

It's not possible to get rid of cpp for C because the language
is too weak, but it is something which takes hard experience to
learn when not to use.

As for a language feature which should never be used: Alex Martelli
gave an example of changing the default definition for == between
floats, which broke other packages, and my favorite is "OPTION
BASE 1" in BASIC or its equivalent in Perl and other languages.
That is, on a per-program (or even per-module) basis, redefine
the 0 point offset for an array.
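
To make the flavor of such booby-traps concrete, here is a small
hypothetical Python sketch (mine, not Alex's actual example): a float
subclass with "tolerant" equality, the kind of global redefinition that
silently breaks code relying on exact semantics.

x = FuzzyFloat(1.0)
print(x == 1.0000001)              # True -- surprises exact-== code
print(hash(x) == hash(1.0000001))  # False -- "equal" objects, unequal
                                   # hashes, so dicts and sets misbehave

class FuzzyFloat(float):
    # Hypothetical: "approximately equal" ==, with an arbitrary tolerance.
    def __eq__(self, other):
        return abs(self - float(other)) < 1e-6
    # Keep instances hashable (defining __eq__ alone would disable hashing).
    __hash__ = float.__hash__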

Andrew
dalke at dalkescientific.com
Alexander Schmolck
2003-10-07 20:25:53 UTC
Permalink
(I'm ignoring the followup-to because I don't read comp.lang.python)
Well, I supposed this thread has spiralled out of control already anyway:)
Indentation-based grouping introduces a context-sensitive element into
the grammar at a very fundamental level. Although conceptually a
block is indented relative to the containing block, the reality of the
situation is that the lines in the file are indented relative to the
left margin. So every line in a block doesn't encode just its depth
relative to the immediately surrounding context, but its absolute
depth relative to the global context.
I really don't understand why this is a problem, since it's trivial to
transform python's 'globally context dependent' indentation block structure
markup into C/Pascal-style delimiter pair block structure markup.

Significantly, AFAICT you can easily do this unambiguously and *locally*, for
example your editor can trivially perform this operation on cutting a piece of
python code and its inverse on pasting (so that you only cut-and-paste the
'local' indentation). Prima facie I don't see how you lose any fine control.
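
To make "trivial" concrete, here is a minimal sketch of the forward
direction (my illustration, not part of the original argument), turning
indentation structure into explicit delimiter pairs; it assumes
consistently indented input:

def indent_to_braces(source):
    # Emit '{' on each indent and '}' on each dedent, making the block
    # structure explicit, C/Pascal-style.
    out, stack = [], [0]
    for line in source.splitlines():
        if not line.strip():
            continue  # blank lines carry no block structure
        depth = len(line) - len(line.lstrip())
        if depth > stack[-1]:
            stack.append(depth)
            out.append(' ' * stack[-2] + '{')
        while depth < stack[-1]:
            stack.pop()
            out.append(' ' * stack[-1] + '}')
        out.append(line)
    while len(stack) > 1:
        stack.pop()
        out.append(' ' * stack[-1] + '}')
    return '\n'.join(out)

The inverse (reindenting according to '{'/'}' pairs) is just as
mechanical, which is all an editor needs in order to cut and paste by
local structure.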
Additionally, each line encodes this information independently of the other
lines that logically belong with it, and we all know that when some data is
encoded in one place it may be wrong, but it is never inconsistent.
Sorry, I don't understand this sentence, but maybe you mean that the potential
inconsistency between human and machine interpretation is a *feature* for Lisp,
C, Pascal etc!? If so I'm really puzzled.
There is yet one more problem. The various levels of indentation encode
different things: the first level might indicate that it is part of a
function definition, the second that it is part of a FOR loop, etc. So on
any line, the leading whitespace may indicate all sorts of context-relevant
information.
I don't understand why this is any different to e.g. ')))))' in Lisp. The
closing ')' for DEFUN just looks the same as that for IF.
Yet the visual representation is not only identical between all of these, it
cannot even be displayed.
I don't understand what you mean. Could you maybe give a concrete example of
the information that can't be displayed? AFAICT you can have 'sexp'-movement,
markup and highlighting commands all the same with whitespace delimited block
structure.
Is this worse than C, Pascal, etc.? I don't know.
I'm pretty near certain it is better: In Pascal, C etc. by and large block
structure delimitation is regulated in such a way that what has positive
information content for the human reader/programmer (indentation) has zero to
negative information content for the compiler and vice versa. This is a
remarkably bad design (and apart from cognitive overhead obviously also causes
errors).

Python removes this significant problem, at (as far as I'm aware) no real cost
and with plenty of additional gain (less visual clutter, no waste of delimiter
characters ('{','}') or introduction of keywords that will be sorely missed as
user-definable names ('begin', 'end')).

In Lisp the situation isn't quite as bad, because although most of the parens
are of course mere noise to a human reader, not all of them are, and because of
lisp's simple but malleable syntactic structure a straightforward replacement
of parens with indentation would obviously result in unreadable code
(fragmented over countless lines and mostly past the 80th column :).

So unlike C and Pascal, where a fix would be relatively easy, you would need
some more complicated scheme in the case of Lisp, and I'm not at all sure it
would be worth the hassle (especially given that efforts in other areas would
likely yield much higher gains).

Still, I'm sure you're familiar with the following quote (with which I most
heartily agree):

"[P]rograms must be written for people to read, and only incidentally for
machines to execute."

People can't "read" '))))))))'.
Worse than Lisp, Forth, or Smalltalk? Yes.
Possibly, but certainly not due to the use of significant whitespace.


'as
Alexander Schmolck
2003-10-15 17:22:49 UTC
Permalink
I don't know why you categorize it as idiocy.
I, having no experience whatsoever with Python
'as
prunesquallor
2003-10-07 23:26:35 UTC
Permalink
Post by Alexander Schmolck
(I'm ignoring the followup-to because I don't read comp.lang.python)
Well, I supposed this thread has spiralled out of control already anyway:)
Indentation-based grouping introduces a context-sensitive element into
the grammar at a very fundamental level. Although conceptually a
block is indented relative to the containing block, the reality of the
situation is that the lines in the file are indented relative to the
left margin. So every line in a block doesn't encode just its depth
relative to the immediately surrounding context, but its absolute
depth relative to the global context.
I really don't understand why this is a problem, since it's trivial to
transform python's 'globally context dependent' indentation block structure
markup into C/Pascal-style delimiter pair block structure markup.
Of course it can. Any unambiguous grammar has a parse tree.
Post by Alexander Schmolck
Significantly, AFAICT you can easily do this unambiguously and *locally*, for
example your editor can trivially perform this operation on cutting a piece of
python code and its inverse on pasting (so that you only cut-and-paste the
'local' indentation). Prima facie I don't see how you lose any fine control.
Only if your cut boundaries are at the same lexical level. If you cut
across boundaries, it is no longer clear what should happen at the paste.

Also, it is frequently the case that you need to `tweak' the code after
you paste it.
Post by Alexander Schmolck
Additionally, each line encodes this information independently of the other
lines that logically belong with it, and we all know that when some data is
encoded in one place it may be wrong, but it is never inconsistent.
Sorry, I don't understand this sentence, but maybe you mean that the potential
inconsistency between human and machine interpretation is a *feature* for Lisp,
C, Pascal etc!? If so I'm really puzzled.
You misunderstand me. In a python block, two expressions are
associated with each other if they are the same distance from the left
edge. This is isomorphic to having a nametag identifying the scope
of the line. Lines are associated with each other iff they have the
same nametag. Change one, and all must change.

If, instead, you use balanced delimiters, then a subexpression no
longer has to encode its position within the containing expression.

Let me demonstrate the isomorphism. A simple python expression:
(grrr.. I cut and paste it, but it lost its indentation between
the PDF file and Emacs. I hope I redo it right...)

def index(directory):
    # like os.listdir, but traverses directory trees
    stack = [directory]
    files = []
    while stack:
        directory = stack.pop()
        for file in os.listdir(directory):
            fullname = os.path.join(directory, file)
            files.append(fullname)
            if os.path.isdir(fullname) and not os.path.islink(fullname):
                stack.append(fullname)
    return files

Now the reason we know that `            files.append(fullname)' and
`            fullname = os.path.join(directory, file)' are part of the
same block is because they both begin with 12 spaces. The first
four spaces encode the fact that they belong to the same function,
the next four indicate that they belong in the while loop, and
the final four indicate that they belong in the for loop.
The ` return files', on the other hand, only has four spaces, so
it cannot be part of the while or for loop, but it is still part
of the function. I can represent this same information as a code:

t -def index(directory):
d - # like os.listdir, but traverses directory trees
d - stack = [directory]
d - files = []
d - while stack:
dw - directory = stack.pop()
dw - for file in os.listdir(directory):
dwf - fullname = os.path.join(directory, file)
dwf - files.append(fullname)
dwf - if os.path.isdir(fullname) and not os.path.islink(fullname):
dwfi- stack.append(fullname)
d - return files

The letter in front indicates what lexical group the line belongs to. This
is simply a different visual format for the leading spaces.

Now, suppose that I wish to protect the body of the while statement
within a conditional. Simply adding the conditional won't work:

d - while stack:
dw - if copacetic():
dw - directory = stack.pop()
dw - for file in os.listdir(directory):
dwf - fullname = os.path.join(directory, file)
dwf - files.append(fullname)
dwf - if os.path.isdir(fullname) and not os.path.islink(fullname):
dwfi- stack.append(fullname)

because the grouping information is replicated on each line, I have to
fix this information in the six different places it is encoded:

d - while stack:
dw - if copacetic():
dwi - directory = stack.pop()
dwi - for file in os.listdir(directory):
dwif - fullname = os.path.join(directory, file)
dwif - files.append(fullname)
dwif - if os.path.isdir(fullname) and not os.path.islink(fullname):
dwifi- stack.append(fullname)

The fact that the information is replicated, and that there is nothing
but programmer discipline keeping it consistent is a source of errors.
Post by Alexander Schmolck
There is yet one more problem. The various levels of indentation encode
different things: the first level might indicate that it is part of a
function definition, the second that it is part of a FOR loop, etc. So on
any line, the leading whitespace may indicate all sorts of context-relevant
information.
I don't understand why this is any different to e.g. ')))))' in Lisp. The
closing ')' for DEFUN just looks the same as that for IF.
That is because the parentheses *only* encode the grouping information;
they do not do double duty and encode what they are grouping. The key
here is to realize that the words `DEFUN' and the `IF' themselves look
very different.
Post by Alexander Schmolck
Yet the visual representation is not only identical between all of these, it
cannot even be displayed.
I don't understand what you mean. Could you maybe give a concrete example of
the information that can't be displayed?
Still, I'm sure you're familiar with the following quote (with which I most
heartily agree):
"[P]rograms must be written for people to read, and only incidentally for
machines to execute."
People can't "read" '))))))))'.
Funny, the people you just quoted would disagree with you about parentheses.
I expect that they would disagree with you about whitespace as well.
prunesquallor
2003-10-11 05:59:06 UTC
Permalink
[narrowing down to c.l.l and c.l.p]
Post by prunesquallor
Post by Alexander Schmolck
Significantly, AFAICT you can easily do this unambiguously and *locally*,
for example your editor can trivially perform this operation on cutting a
piece of python code and its inverse on pasting (so that you only
cut-and-paste the 'local' indentation). Prima facie I don't see how you
lose any fine control.
Only if your cut boundaries are at the same lexical level. If you cut
across boundaries, it is no longer clear what should happen at the paste.
The only unclarity would arise from dedents of more than one level on the
corner of your cut region, I think.
Suppose I cut just one arm of a conditional. When I paste, it is
unclear whether I intend for the code after the paste to be part of
that arm, part of the else, or simply part of the same block.
Post by prunesquallor
The fact that the information is replicated, and that there is nothing
but programmer discipline keeping it consistent is a source of errors.
Sure there is. Your editor and immediate visual feedback (no need to remember
to reindent after making the semantic changes).
`immediate visual feedback' = programmer discipline
Laxness at this point is a source of errors.
Post by prunesquallor
Post by Alexander Schmolck
I don't understand why this is any different to e.g. ')))))' in Lisp. The
closing ')' for DEFUN just looks the same as that for IF.
That is because the parenthesis *only* encode the grouping information,
they do not do double duty and encode what they are grouping.
The key here is to realize that the words `DEFUN' and the `IF' themselves
look very different.
Well, I'm evidently still not getting it, because my reply would be "and so do
'def ...:' and 'if ...:' in python" (and you also can't tell whether the 8
spaces on the left margin come from a 'def' and and an enclosed 'if' or vice
versa).
I'm not sure I can explain any better, I *think* perhaps some of the other
Lispers might get what I'm saying here.
Post by prunesquallor
Post by Alexander Schmolck
Yet the visual representation is not only identical between all of these, it
cannot even be displayed.
I don't understand what you mean. Could you maybe give a concrete example of
the information that can't be displayed?
10 spaces (which BTW I counted in emacs in just the same way that I'd count a
similar number of parens) -- but what has counting random trailing whitespace
got to do with anything?
It is simply an illustration that there is no obvious glyph associated
with whitespace, and you wanted a concrete example of something that can't
be displayed.
Post by prunesquallor
Post by Alexander Schmolck
Still, I'm sure you're familiar with the following quote (with which I most
heartily agree):
"[P]rograms must be written for people to read, and only incidentally for
machines to execute."
People can't "read" '))))))))'.
Funny, the people you just quoted would disagree with you about parenthesis.
I expect that they would disagree with you about whitespace as well.
Why, do they know someone with a paren-counting brain-implant? If so every
lisper should get one -- no more carpal tunnel syndrome from frantic C-M-\
pressing or alphabet wastage due to scheme-style 'aliasing' of '[...]' to
'(...)'.
Seriously, I really fail to see see what the source of disagreement with the
above statement could be.
I cannot read Abelson and Sussman's minds, but neither of them are
ignorant of the vast variety of computer languages in the world.
Nonetheless, given the opportunity to choose any of them for
exposition, they have chosen lisp. Sussman went so far as to
introduce lisp syntax into his book on classical mechanics.
Apparently he felt that not only *could* people read ')))))))', but
that it was often *clearer* than the traditional notation.
If I gave you a piece of lisp code jotted down on paper that (as those
hypothetical examples usually are) for some reason was of vital importance and
that on typing it in revealed a mismatch between the indentation and the
parenthesation in a key section of the code -- which interpretation would you
hedge your bets on in the absence of other indicators; the one suggested by
the indentation or the 9 trailing parens?
Obviously the indentation. But I'd notice the mismatch.

If I gave you a piece of python code jotted down on paper that (as these
hypothetical examples usually are) for some reason was of vital importance
but I accidentally misplaced the indentation -- how would you know?
Marco Antoniotti
2003-10-10 16:33:31 UTC
Permalink
Ok. At this point I feel the need to apologize to everybody for my
rants and I promise I will do my best to end this thread.

I therefore utter the H-word and hopefully cause this thread to stop.

Cheers
--
marco
Paolo Amoroso
2003-10-13 17:03:18 UTC
Permalink
[following up to comp.lang.python and comp.lang.lisp]
necessarily yield optimal productivity in programming. What
language design trade-offs WILL yield such optimal productivity,
*DEPENDING* on the populations and tasks involved, is the crux
of this useless and wearisome debate (wearisome and useless, in
good part, because many, all it seems to me on the lisp side,
appear to refuse to admit that there ARE trade-offs and such
dependencies on tasks and populations).
I agree (Lisp side).
The only example of 'power' I've seen (besides the infamous
with-condition-maintained example) are such trifles as debugging-
output macros that may get compiled out when a global flag to
disable debugging is set -- exactly the kind of micro-optimization
If you are interested in advanced uses of Common Lisp macros, you may
have a look at Paul Graham's book "On Lisp", which is available for
download at his site. Other examples are probably in the Screamer
system by Jeffrey Mark Siskind.


Paolo
--
Paolo Amoroso <amoroso at mclink.it>
Jeremy H. Brown
2003-10-03 17:53:05 UTC
Permalink
This is actually a pretty good list. I'm not commenting on
...
Implementations >10 ~4
===========================================^
See http://alu.cliki.net/Implementation - it lists 9 commercial
implementations, and 7 opensource implementations. There are
probably more.
Thanks. I hadn't realized the spread was that large.
Performance "worse" "better"
Standards IEEE ANSI
Reference name R5RS CLTL2
============================================^
No, CLtL2 is definitely _not_ a reference for ANSI Common Lisp.
It was a snapshot taken in the middle of the ANSI process, and
is out of date in several areas. References which are much closer
to the ANSI spec can be found online at
http://www.franz.com/support/documentation/6.2/ansicl/ansicl.htm
or
http://www.lispworks.com/reference/HyperSpec/Front/index.htm
Thanks again.
Try them both, see which one works for you in what you're doing.
Agreed, but of course, I'd recommend CL :-)
I've arrived at the conclusion that it depends both on your task/goal
and your personal inclinations.

Jeremy
Mark Brady
2003-10-03 18:50:22 UTC
Permalink
"Mark Brady" <kalath at lycos.com> wrote in message
This whole thread is a bad idea.
I could agree that the OP's suggestion is a bad idea but do you
actually think that discussion and more publicity here for Lisp/Scheme
is bad? You make a pretty good pitch below for more Python=>Lisp
converts.
You are right of course, however I dislike cross posting and I also
dislike blatantly arguing with people over language choice. I would
prefer to lead by example. I think one good program is worth a
thousand words. For example people listen to Paul Graham
(http://www.paulgraham.com/avg.html) when he advocates Common Lisp
because he wrote Viaweb using it and made a fortune thanks to Lisp's
features (details in the link).
If you like python then use python.
As I plan to do.
Nothing wrong with that. Most people on these groups would agree that
Python is a very good choice for a wide range of software projects and
it is getting better with every release.

I think that if you can get over S-exps then Scheme and Common Lisp
feel very like python. I would recommend Pythonistas to at least
experiment with Common Lisp or Scheme even if you are perfectly happy
with Python. After all you have nothing to lose. If you don't like it
then fine you always have Python and you've probably learned something
and if you do like it then you have another language or two under your
belt.
Personally I find Scheme and Common Lisp easier to read but that's
just me, I prefer S-exps and there seems to be a rebirth in the Scheme
and Common Lisp communities at the moment. Ironically this seems to
have been helped by python. I learned python then got interested in
it's functional side and ended up learning Scheme and Common Lisp. A
lot of new Scheme and Common Lisp developers I talk to followed the
same route. Python is a great language and I still use it for some
things.
Other Lispers posting here have gone to pains to state that Scheme is
not a dialect of Lisp but a separate Lisp-like language. Could you
give a short listing of the current main differences (S vs. CL)? If I
were to decide to expand my knowledge by exploring the current
versions of one (I've read the original SICP and LISP books), on what
basis might I make a choice?
Terry J. Reedy
This is a difficult question to answer. It's a bit like trying to
explain the differences between Ruby and Python to a Java developer
;-)

*Personally* I find it best to think of Scheme and Common Lisp as two
different but very closely related languages. The actual languages and
communities are quite different.

Common Lisp is a large, very pragmatic, industrial strength language
and its community reflects this. Common Lisp has loads of features
that you would normally only get in add-on libraries built right into
the language, its object
system "CLOS" has to be experienced to be believed and its macro
system is stunning.
effort into making it capable of great things such as Nasa's award
winning remote agent software
(http://ic.arc.nasa.gov/projects/remote-agent/).

Scheme is a more functional language and unlike Common Lisp it has a
single namespace for functions and variables (Python is like Scheme in
this regard). Common Lisp can be just as functional but on the whole
the Scheme community seems to embrace functional programming to a
greater extent.

Scheme is like python in that the actual language is quite small and
uses libraries for many of the same tasks Python would use them for,
unlike Common Lisp that has many of these features built into the
language. It also has a great but slightly different macro system
although every implementation I know also has Common Lisp style
Macros.

Scheme doesn't have a standard object system (it's more functional)
but has libraries to provide object systems. This is very hard to
explain to Python developers: Scheme is kind of like a big Python
metaclass engine where different object systems can be used at will.
It's better than I can describe and it is really like a more powerful
version of Python's metaclass system.

Pythonistas who love functional programming may prefer Scheme to
Common Lisp while Pythonistas who want a standard amazing object
system and loads of built in power in their language may prefer Common
Lisp.

To be honest, these tutorials will do a far better job than I
could:

For Scheme get DrScheme:
http://www.drscheme.org/

and go to

'Teach yourself scheme in fixnum days' :
http://www.ccs.neu.edu/home/dorai/t-y-scheme/t-y-scheme.html


For Common Lisp get the trial version of Lispworks:
http://www.lispworks.com/

and go get Mark Watsons free web book:
http://www.markwatson.com/opencontent/lisp_lic.htm

Regards,
Mark.

Ps. If anyone spots a mistake in this mail please correct me, it will
have been an honest one and not an attempt to slander your favourite
language and I will be glad to be corrected, in other words there is
no need to flame me :)
Alex Martelli
2003-10-11 23:19:54 UTC
Permalink
Kenny Tilton wrote:
...
The very 'feature' that was touted by Erann Gat as macros' killer
advantage in the WITH-CONDITION-MAINTAINED example he posted is the
crucial difference: functions (HO or not) and classes only group some
existing code and data; macros can generate new code based on examining,
and presumably to some level *understanding*, a LOT of very deep things
about the code arguments they're given.
Stop, you're scaring me. You mean to say there are macros out there whose
output/behavior I cannot predict? And I am using them in a context where
I need to know what the behavior will be? What is wrong with me? And
what sort of non-deterministic macros are these, that go out and make
their own conclusions about what I meant in some way not documented?
Let's start with that WITH-CONDITION-MAINTAINED example of Gat. Remember
it? OK, now, since you don't appear to think it was an idiotic example,
then SHOW me how it takes the code for the condition it is to maintain and
the (obviously very complicated: starting a reactor, operating the reactor,
stopping the reactor -- these three primitives in this sequence) program
over which it is to maintain it, and how it modifies that code to ensure
this purpose. Surely, given its perfectly general name, that macro does not
contain, in itself, any model of the reactor; so it must somehow infer it
(guess it?) from the innards of the code it's analyzing and modifying.

Do you need to know what the behavior will be, when controlling a reactor?
Well, I sort of suspect you had better. So, unless you believe that Gat's
example was utterly idiotic, I think you can start explaining from right
there.
I think the objection to macros has at this point been painted into a
very small corner.
I drastically disagree. This is just one example, that was given by one of
the most vocal people from your side, and apparently not yet denounced
as idiotic, despite my asking so repeatedly about it, so presumably agreed
with by your side at large. So, I'm focusing on it until its import is
clarified. Once that is done, we can tackle the many other open issues.

For example, the fact that Gat himself says that if what I want to write
are normal applications, macros are not for me: only for those who want
to push the boundaries of the possible are they worthwhile. Do you think
THAT is idiotic, or wise? Please explain either the reason for the drastic
disagreements in your camp, or why most of you keep trying to push
macros (and lisp in general) at those of us who are NOT particularly
interested in "living on the edge" and running big risks for their own sake,
according to your answer to the preceding question, thanks.

"Small corner"?! You MUST be kidding. Particularly given that so many
on your side don't read what I write, and that you guys answer the same
identical questions in completely opposite ways (see below for examples
of both), I don't, in fact, see how this stupid debate will ever end, except
by exhaustion. Meanwhile, "the objection to macros" has only grown
larger and larger with each idiocy I've seen spouted in macros' favour,
and with each mutual or self-contradiction among the macros' defenders.
...If all you do with your macros is what you could
do with HOF's, it's silly to have macros in addition to HOF's
There is one c.l.l. denizen/guru who agrees with you. I believe his
...and there's another who has just answered in the EXACTLY opposite
way -- that OF COURSE macros can do more than HOF's. So, collectively
speaking, you guys don't even KNOW whether those macros you love so
much are really necessary to do other things than non-macro HOFs allow
(qualification inserted to try to divert the silly objection, already made
by others on your side, that macros _are_ functions), or just pretty things
up a little bit. Would y'all mind coming to some consensus among you
experienced users of macros BEFORE coming to spout your wisdom over
to us poor benighted non-lovers thereof, THANKYOUVERYMUCH...?
far from all). This is far from the first time I'm explaining this, btw.
Oh. OK, now that you mention it I have been skimming lately.
In this case, I think it was quite rude of you to claim I was not answering
questions, when you knew you were NOT READING what I wrote.


As you claim that macros are just for prettying things up, I will restate
(as you may not have read it) one of the many things I've said over and
over on this thread: I do not believe the minor advantage of prettying
things up is worth the complication, the facilitation of language
divergence between groups, and the deliberate introduction of multiple
equivalent ways to solve the same problem, which I guess you do know
I consider a bad thing, one that impacts productivity negatively.


Alex
Alex Martelli
2003-10-11 16:44:07 UTC
Permalink
prunesquallor at comcast.net wrote:
...
A single web browser?
I far prefer to have on my cellphone one that's specialized for its small
screen and puny cpu/memory, and on more powerful computers, more
powerful browsers. Don't you?
Cellphone? Why on earth would I want one of those?
Why would you want a cellphone, or why would you want a browser on it?
As for the former, many people I know first got their cellphones because
their parents were old and sick, and they wanted to be sure their parents
could immediately get in touch with them in case of need. Of course, not
knowing you at all, I can't tell whether you're an orphan, or have parents
who are young and healthy, or don't care a whit about their being able to
reach you -- whatever. Once you do have a cellphone for whatever reason,
why not get more use out of it? Checking weather reports for wherever
you're traveling to, etc, etc -- a browser's a decent way to do many such
things.
A single operating system?
On my cellphone all the way to the datacenter?
A cellphone with an OS?! The phone I use is implemented
with *wires*.
Forget cellphones then, and let's check if we DO want a single operating
system over a huge range of computers serving enormously different
tasks, as Microsoft is so keen to tell us, or not.

What do I want of the OS running my firewall? Security, security, security.
It's nice if it can run on very scant resources, offers solid and usable
packet filtering, and has good, secure device drivers available for the
kind of devices I may want in a computer dedicated to firewalling -- all
sorts of ethernet cards, wifi thingies, pppoe and plain old ppp on ISDN for
emergency fallback if cable service goes down, UPS boxes, a serial console
of course, perhaps various storage devices, and that's about it.

I see no reason why I should use anything but OpenBSD for that. Right now
I'm running it, for a LAN of half a dozen desktops and 2-5 laptops, on an
old scavenged Pentium-75, 32MB RAM, 1GB disk ISA machine, and it's got so
many resources to spare that I ended up running services aplenty on it too
(DHCP, DNS, ntp, Squid for proxying, ...).

What do I want of the OS running my desktop? Resources are not a real
problem, since throwing CPU cycles, RAM, and disk at it is so dirt-cheap.
Some security, just enough to avoid most of the pesky worms and viruses
going around. Lots and LOTS of apps, and lots and LOTS of drivers for all
sort of cool devices for video, audio, and the like. Linux is pretty good
there, though I understand Windows can be useful sometimes (drivers
aren't yet available for _every_thing under Linux) even though security is
awful there, and MacOS/X would be cool were it not for HW cost. For a small
server? Resources should not be eaten up by the OS but available for
serving the rest of the LAN -- lots of free server-side apps & proxies --
security important, device drivers so-so -- I can see either Linux,
OpenBSD, or FreeBSD being chosen there.

A LARGE server, were I to need one? A Linux cluster, or an IBM mainframe
able to run Linux at need on virtual machines, sound better then.

A laptop? A palmtop? Linux may cover some of those (I do enjoy it on my
Zaurus) but is really too demanding for the cheapest palmtops -- and I
still can't get good sleep (as opposed to hybernate) from it on laptops
with ACPI.

What about the several computers in my car? They play very specialized
roles and would get NO advantages from general-purpose OS's -- and on
the other hand, most of them REALLY need hard real-time OS's to do their
jobs. I think not even MS tries to push Windows into most of THOSE
computers -- it would be just too crazy even for them.

So, in practice, I'd never go with the same OS on ALL computers. What
is most needed on computers playing such widely different roles is just
too different: an OS trying to cover ALL bases would be so huge, complicated
and unwieldy that its security AND general bugginess would suck (please
notice that Windows is the only OS really trying, even though not really on
anywhere near the WHOLE gamut -- with Linux admittedly close behind;-).


Alex
David Mertz
2003-10-07 18:13:39 UTC
Permalink
|> def posneg(filter,iter):
|>     results = ([],[])
|>     for x in iter:
|>         results[not filter(x)].append(x)
|>     return results
|> collect_pos,collect_neg = posneg(some_property, some_file_name)

Pascal Costanza <costanza at web.de> wrote previously:
|What about dealing with an arbitrary number of filters?

Easy enough:

def categorize_exclusive(filters, iter):
    results = tuple([[] for _ in range(len(filters))])
    for x in iter:
        for n, filter in enumerate(filters):
            if filter(x):
                results[n].append(x)
                break
    return results

Or if you want to let things fall in multiple categories:

def categorize_inclusive(filters, iter):
    results = tuple([[] for _ in range(len(filters))])
    for x in iter:
        for n, filter in enumerate(filters):
            if filter(x):
                results[n].append(x)
    return results

Or if you want something to satisfy ALL the filters:

def categorize_compose(filters, iter):
    results = tuple([[] for _ in range(len(filters))])
    for x in iter:
        results[compose(filters)(x)].append(x)
    return results

The implementation of 'compose()' is left as an exercise to readers :-).
Or you can buy my book, and read the first chapter.
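
For the impatient, here is one plausible definition (my sketch, which
simply ANDs the predicates together -- not necessarily the book's
version):

def compose(filters):
    # Fold a sequence of predicates into a single predicate; the bool
    # result (0 or 1) is what categorize_compose above indexes bins with.
    def composed(x):
        return all(f(x) for f in filters)
    return composed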

Yours, David...

--
Keeping medicines from the bloodstreams of the sick; food from the bellies
of the hungry; books from the hands of the uneducated; technology from the
underdeveloped; and putting advocates of freedom in prisons. Intellectual
property is to the 21st century what the slave trade was to the 16th.
--
Buy Text Processing in Python: http://tinyurl.com/jskh
Steve Williams
2003-10-12 03:37:09 UTC
Permalink
Alex Martelli wrote:
[snip]
but we DO want to provide very clear and precise error diagnostics, of course,
and the language/metalanguage issue is currently open). You will note
that this use of macros involves none of the issues I have expressed about
them (except for the difficulty of providing good error-diagnostics, but
that's of course solvable).
finally, a breath of fresh air.
David Eppstein
2003-10-06 17:46:13 UTC
Permalink
In article
<my-first-name.my-last-name-0610030955090001 at k-137-79-50-101.jpl.nasa.gov>,
: (with-collector collect
:   (do-file-lines (l some-file-name)
:     (if (some-property l) (collect l))))
: This returns a list of all the lines in a file that have some property.
OK, that's _definitely_ just a filter: filter someproperty somefilename
Perhaps throw in a fold if you are trying to abstract "collect".
The net effect is a filter, but again, you need to stop thinking about the
"what" and start thinking about the "how", otherwise, as I said, there's
no reason to use anything other than machine language.
Answer 1: literal translation into Python. The closest analogue of
with-collector etc would be Python's simple generators (yield keyword)
and do-with-file-lines is expressed in python with a for loop. So:

def lines_with_some_property(some_file_name):
    for l in some_file_name:
        if some_property(l):
            yield l
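
(A hypothetical usage, with made-up names -- note the parameter is
really any iterable of lines, e.g. an open file object:)

some_property = lambda line: "ERROR" in line   # stand-in predicate

for line in lines_with_some_property(open("app.log")):
    print(line, end="")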

Your only use of macros in this example is to handle the with-collector
syntax, which is handled in a clean macro-free way by Python's "yield".
So this is unconvincing as a demonstration of why macros are a necessary
part of a good programming language.

Of course, with-collector could be embedded in a larger piece of code,
while using yield forces lines_with_some_property to be a separate
function, but modularity is good...

Answer 2: poetic translation into Python. If I think about "how" I want
to express this sort of filtering, I end up with something much less
like the imperative-style code above and much more like:

[l for l in some_file_name if some_property(l)]

I have no problem with the assertion that macros are an important part
of Lisp, but you seem to be arguing more generally that the lack of
macros makes other languages like Python inferior because the literal
translations of certain macro-based code are impossible or more
cumbersome. For the present example, even that argument fails, but more
generally you'll have to also convince me that even a freer poetic
translation doesn't work.
--
David Eppstein http://www.ics.uci.edu/~eppstein/
Univ. of California, Irvine, School of Information & Computer Science
Tim Hochberg
2003-10-15 20:15:36 UTC
Permalink
Apparently I was mistaken, at least for modern lisps. I haven't
programmed in lisp for a long time; the most recent lisp manual I
can find at hand is a 1981 Chinual, which still seems to be using
dynamic scoping and requires an explicit function call to create a
closure.
Fair enough. But then we should compare this Lisp to the state of
Python around 1981... :)
Nope, with python in 2010 -- at that point, Python will be about as old
as Lisp was in '81.

-tim
Edi.
Jock Cooper
2003-10-15 17:09:04 UTC
Permalink
In article <eppstein-434779.17173814102003 at news.service.uci.edu>,
Isn't it true though that the lambda can only contain a single expression
and no statements? That seems to limit closures somewhat.
It limits lambdas. It doesn't limit named functions. Unlike lisp, a
Python function definition can be nested within another function, and the
inner function can access variables in the outer function's closure.
To clarify, by "unlike lisp" I meant only that defun doesn't nest (at
least in the lisps I've programmed) -- of course you could use flet, or
bind a variable to a lambda, or whatever.
Ok so in Python a function can DEF another function in its body. I assume
this can be returned to the caller. When you have a nested DEF like that,
is the nested function's name globally visible?
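To answer that directly with a minimal sketch (my example, not from the
thread): yes, the inner function can be returned to the caller, and no,
its name is not globally visible -- it is local to the enclosing
function:

def make_adder(offset):
    def add_offset(x):         # name visible only inside make_adder
        return x + offset      # closes over offset
    return add_offset          # the function object escapes...

add5 = make_adder(5)
print(add5(10))                    # 15
print("add_offset" in globals())   # False -- ...but the name does not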
Pascal Costanza
2003-10-08 11:04:56 UTC
Permalink
"Pascal Costanza" <costanza at web.de> wrote in message
Post by David Mertz
What about dealing with an arbitrary number of filters?
[macro snipped]
What about it? Using macros for somewhat simple functions strikes me
as overkill.
You're right. The use of with-collectors makes it more appropriate to
express it as a macro, but of course, one can use a simple function when
you don't stick to with-collectors.
Post by David Mertz
(predicate-collect '(-5 -4 -3 -2 -1 0 1 2 3 4 5)
                   (function evenp)
                   (lambda (n) (< n 0))
                   (lambda (n) (> n 3)))
(-4 -2 0 2 4)
(-5 -3 -1)
(5)
(1 3)
def predicate_collect(items, *preds):
    predn = len(preds)
    bins = [[] for i in range(predn+1)]
    predpends = [(p,b.append) for (p,b) in zip(preds,bins)]
    rpend = bins[predn].append
    for item in items:
        for p, pend in predpends:
            if p(item):
                pend(item)
                break
        else: rpend(item)
    return bins

>>> predicate_collect([-5,-4,-3,-2,-1,0,1,2,3,4,5],
...     lambda i: i%2==0, lambda i: i<0, lambda i: i>3)
[[-4, -2, 0, 2, 4], [-5, -3, -1], [5], [1, 3]]
For the sake of completeness, here is the Lisp version:

(defun predicate-collect (list &rest predicates)
  (let ((table (make-hash-table))
        (preds (append predicates
                       (list (constantly t)))))
    (dolist (elem list)
      (loop for pred in preds
            until (funcall pred elem)
            finally (push elem (gethash pred table))))
    (mapcar (lambda (pred)
              (nreverse (gethash pred table)))
            preds)))


? (predicate-collect
   '(-5 -4 -3 -2 -1 0 1 2 3 4 5)
   (function evenp)
   (lambda (n) (< n 0))
   (lambda (n) (> n 3)))

((-4 -2 0 2 4) (-5 -3 -1) (5) (1 3))


Pascal
Björn Lindberg
2003-10-09 10:14:05 UTC
Permalink
So either the syntax doesn't make a whole hell of a lot of difference
in readability, or readability doesn't make a whole hell of a lot of
difference in utility.
Or the people who prefer the awesome power that is Lisp and
Scheme don't find the limited syntax to be a problem.
All evidence points to the fact that Lisp syntax is no worse than
Algol-style syntax. As Joe explained, other syntaxes have been used
for Lisp many times over the years, but lispers seem to prefer the
s-exp one. If anything, one could draw the conclusion that s-exp
syntax must be /better/ than Algol-style syntax since the programmers
who have a choice which of them to use -- for the same language --
apparently choose s-exp syntax. You really have no grounds to call
Lisp syntax limited.


Björn
A.M. Kuchling
2003-10-10 01:27:49 UTC
Permalink
On Thu, 09 Oct 2003 20:56:01 GMT,
or not. The similar statement which bristles me the most is at the
top of biolisp.org, ...
Yee-*ouch*. For those not reading the site: it modifies an aphorism from
Henry Spencer to "Those who do not know Lisp are condemned to reinvent
it.... poorly," and hyperlinks it to biopython.org. Tacky. Sheesh, the
Perl/Python squabbling has pretty much died down these days; maybe they
missed the memo.

--amk
Bengt Richter
2003-10-16 02:57:14 UTC
Permalink
|> |Come on. Haskell has a nice type system. Python is an application of
|> |Greenspun's Tenth Rule of programming.
|> Btw. This is more nonsense. HOFs are not a special Lisp thing. Haskell
|> does them much better, for example... and so does Python.
|Wow. The language with the limited lambda form, whose Creator regrets
|including in the language, is ... better ... at HOFs?
|You must be smoking something really good.
"Those who know only Lisp are doomed to repeat it (whenver they look at
another language)." It does a better job of getting at the actual
dynamic.
Is there anyone who knows only Lisp?
Those who know Lisp repeat it for a reason -- and it isn't because
it's all they know! [Besides, Greenspun's 10th isn't about _Lispers_
reinventing Lisp; it's about _everybody else_ reinventing Lisp]
In point of fact, Python could completely eliminate the operator
'lambda', and remain exactly as useful for HOFs. Some Pythonistas seem
to want this, and it might well happen in Python3000. It makes no
difference... the alpha and omega of HOFs is that functions are first
class objects that can be passed and returned. Whether they happen to
have names is utterly irrelevant, anonymity is nothing special.
True, but if you are forced to bind to a name in order to get hold of some
first class functions but not others, then some are IMO "more equal than others."
And unless you allow assignments in expressions, def foo():... will be excluded,
because it assigns/binds foo in the def evaluation context, whereas lambda doesn't.

For simple functions, things look a lot the same, e.g.,
>>> def show():
...     def foo(x): return x+1
...     bar = lambda x: x+1
...     return foo,bar
...
>>> import dis
>>> dis.dis(show)
2 0 LOAD_CONST 1 (<code object foo at 009033A0, file "<stdin>", line 2>)
3 MAKE_FUNCTION 0
6 STORE_FAST 0 (foo)

3 9 LOAD_CONST 2 (<code object <lambda> at 009034A0, file "<stdin>", line 3>)
12 MAKE_FUNCTION 0
15 STORE_FAST 1 (bar)

4 18 LOAD_FAST 0 (foo)
21 LOAD_FAST 1 (bar)
24 BUILD_TUPLE 2
27 RETURN_VALUE
28 LOAD_CONST 0 (None)
31 RETURN_VALUE
>>> foo,bar = show()
>>> dis.dis(foo)
2 0 LOAD_FAST 0 (x)
3 LOAD_CONST 1 (1)
6 BINARY_ADD
7 RETURN_VALUE
8 LOAD_CONST 0 (None)
11 RETURN_VALUE
>>> dis.dis(bar)
3 0 LOAD_FAST 0 (x)
3 LOAD_CONST 1 (1)
6 BINARY_ADD
7 RETURN_VALUE

The difference in the code generated seems to be mainly that lambda is guaranteed to have
a return value expression at the end, so there is no return case to solve by default boilerplate,
and it is left out.
>>> (lambda
... x
... :
... x
... +
... 1
... )
<function <lambda> at 0x008FDE70>

(because indentation is ignored inside brackets). This obviously now precludes lambda bodies that
are dependent on indented code suites, and makes it (so far) impossible to put def foo():pass
inside expression brackets. But this is surface stuff w.r.t. the definition of the code body IMO.

The name part does make a difference, however, because it amounts to a forced assignment (note
that you get MAKE_FUNCTION followed by STORE_FAST whether you assign a lambda expression "manually"
or do it by def. You get identical code (see above), but with lambda you don't have to have
a STORE_FAST LOAD_FAST to get the use of your function as "first class").
True enough. Naming things is a pain though. Imagine if you couldn't
use numbers without naming them: e.g., if instead of 2 + 3 you had to
do something like
two = 2
three = 3
two + three
Bleargh! It "makes no difference" in much the same way that using
assembler instead of Python "makes no difference" -- you can do the
same thing either one, but one way is enormously more painful.
I think it makes a semantic difference, not just convenience.
[Mind you, Python's lambda is next to useless anyway]
Well, even so, I would miss it, unless it's given a full life as a nameless def():
I don't think we can know if YAGNI will apply, since there is no current opportunity
for beautiful use cases to evolve or even viably to be conceived.

Regards,
Bengt Richter
Sean Ross
2003-10-07 13:21:04 UTC
Permalink
"Bengt Richter" <bokr at oz.net> wrote in message
Just thought of this: a generator list comprehension, so your could write
[snip]
Has someone proposed this already? I seems a natural, unless
I am blindly optimistic, which is quite possible ;-)
[snip]

Hi.
Yes, they've been proposed (by Raymond Hettinger, PEP289) and rejected
(http://www.python.org/peps/pep-0289.html).

Sean
Pascal Costanza
2003-10-10 00:46:00 UTC
Permalink
Why the smiley? Many hours of discussions could be spared if there
were real, scientific, solid studies on the benefit of certain
language features or languages in certain domains or for certain types
of programmers.
This presumes that language features can be judged in isolation. I think
it's rather more likely that good programming languages are holistic
systems, in the sense that the whole language is more than the sum of
its features.


Pascal
Hans Nowak
2003-10-05 01:20:40 UTC
Permalink
|shocked at how awkward Paul Graham's "accumulator generator" snippet is
| class foo:
|     def __init__(self, n):
|         self.n = n
|     def __call__(self, i):
|         self.n += i
|         return self.n
Me too. The way I'd do it is probably a lot closer to the way Schemers
would:
>>> def foo(i, accum=[0]):
...     accum[0] += i
...     return accum[0]
...
>>> foo(1)
1
>>> foo(3)
4
Shorter, and without an awkward class.
Yah, but instead it abuses a relatively obscure Python feature... the fact that
default arguments are created when the function is created (rather than when it
is called). I'd rather have the class, which is, IMHO, a better way to
preserve state than closures. (Explicit being better than implicit and all
that... :-)
--
Hans (hans at zephyrfalcon.org)
http://zephyrfalcon.org/
Hartmann Schaffer
2003-10-14 16:38:35 UTC
Permalink
In article <raffaelcavallaro-583E78.11332714102003 at netnews.attbi.com>,
(flet ((add-offset (x) (+ x offset)))
(map 'list #'add-offset some-list))
But flet is just lambda in drag. I mean real, named functions, with
(add-offset the-list)
instead of either of theversions you gave.
the version he gave has the advantage that it doesn't clutter up the
namespace of the environment. with

(map 'list (lambda (x) (+ x offset)) the-list)

you have everything that is relevant in the immediate neighborhood of
the statement. with a separate defun you have to search the program
to see what the function does. i agree that for lengthy functions
defining it separately and just writing the name is preferable.
...
I guess I'm arguing that the low level implementation details should not
be inlined by the programmer, but by the compiler. To my eye, anonymous
functions look like programmer inlining.
i would call this a misconception

hs
--
ceterum censeo SCO esse delendam
Robin Becker
2003-10-11 23:16:17 UTC
Permalink
In article <gGWhb.200390$hE5.6777507 at news1.tin.it>, Alex Martelli
<aleaxit at yahoo.com> writes
...
bombs waiting to go off since they have long standing prior meanins
not in any way associated with this type of operation. OTOH, if you
really wanted them, you could define them.
Is it a good thing that you can define "bombs waiting to go off"?
Python's reply "There should be one-- and preferably only one --
obvious way to do it."
This then is probably the best reason to _not_ use Python for anything
other than the trivial. It has long been known in problem solving
(not just computation) that multiple ways of attacking a problem, and
shifting among those ways, tends to yield the better solutions.
One, and preferably only one, of those ways should be the obvious one,
i.e., the best solution. There will always be others -- hopefully they'll
be clearly enough inferior to the best one, that you won't have to waste
too much time considering and rejecting them. But the obvious one
"may not be obvious at first unless you're Dutch".
The worst case for productivity is probably when two _perfectly
equivalent_ ways exist. Buridan's ass notoriously starved to death in
just such a worst-case situation; groups of programmers may not go
quite as far, but are sure to waste lots of time & energy deciding.
Alex
I'm not sure when this concern for the one true solution arose, but even
GvR provides an explicit example of multiple ways to do it in his essay
http://www.python.org/doc/essays/list2str.html

Even in Python there will always be tradeoffs between clarity and
efficiency.
--
Robin Becker
Steve VanDevender
2003-10-05 06:58:32 UTC
Permalink
Lisp (and possibly other languages I am not familiar with) adds the
alternative of *not* evaluating arguments but instead passing them as
unevaluated expressions. In other words, arguments may be
*implicitly* quoted. Since, unlike as in Python, there is no
alternate syntax to flag the alternate argument protocol, one must, as
far as I know, memorize/learn the behavior for each function. The
syntactic unification masks but does not lessen the semantic
divergence. For me, it made learning Lisp (as far as I have gotten)
more complicated, not less, especially before I 'got' what was going on.
What you're talking about are called "special forms" and are definitely
not functions, and are used when it is semantically necessary to leave
something in an argument position unevaluated (such as in 'cond' or
'if', Lisp 'defun' or 'setq', or Scheme 'define' or 'set!').
Programmers create them using the macro facilities of Lisp or Scheme
rather than as function definitions. There are only a handful of
special forms one needs to know in routine programming, and each one has
a clear justification for being a special form rather than a function.

Lisp-family languages have traditionally held to the notion that Lisp
programs should be easily representable using the list data structure,
making it easy to manipulate programs as data. This is probably the
main reason Lisp-family languages have retained the very simple syntax
they have, as well as why there is no separate syntax for functions
and special forms.
Question: Python has the simplicity of one unified assignment
statement for the binding of names, attributes, slots and slices, and
multiples thereof. Some Lisps have the complexity of different
functions for different types of targets: set, setq, putprop, etc.
What about Scheme ;-?
Scheme has 'define', 'set!', and 'lambda' for identifier bindings (from
which 'let'/'let*'/'letrec' can be derived), and a number of mutation
operations for composite data types: 'set-car!'/'set-cdr!' for pairs,
'vector-set!' for mutating elements of vectors, 'string-set!' for
mutating strings, and probably a few others I'm forgetting.
--
Steve VanDevender "I ride the big iron" http://jcomm.uoregon.edu/~stevev
stevev at hexadecimal.uoregon.edu PGP keyprint 4AD7AF61F0B9DE87 522902969C0A7EE8
Little things break, circuitry burns / Time flies while my little world turns
Every day comes, every day goes / 100 years and nobody shows -- Happy Rhodes
Kenny Tilton
2003-10-04 18:55:57 UTC
Permalink
record as promising that the major focus in the next release
of Python where he can introduce backwards incompatibilities
(i.e. the next major-number-incrementing release, 3.0, perhaps,
say, 3 years from now) will be the _elimination_ of many of
the "more than one way to do it"s that have accumulated along
the years mostly for reasons of keeping backwards compatibility
(e.g., lambda, map, reduce, and filter,
Oh, goodie, that should win Lisp some Pythonistas. :) I wonder if Norvig
will still say Python is the same as Lisp after that.
Python draws a firm distinction between expressions and
statements. Again, the deep motivation behind this key
distinction can be found in several points in the Zen of
Python, such as "flat is better than nested" (doing away
with the expression/statement separation allows and indeed
encourages deep nesting) and "sparse is better than dense"
(that 'doing away' would encourage expression/statements
with a very high density of operations being performed).
In Lisp, all forms return a value. How simple is that? Powerful, too,
because a rule like "flat is better than nested" is flat out dumb, and I
mean that literally. It is a dumb criterion in that it does not consider
the application.

Take a look at the quadratic formula. Is that flat? Not. Of course
Python allows nested math (hey, how come!), but non-mathematical
computations are usually trees, too.
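
A small Python illustration of the point (mine, not Kenny's):

import math

def roots_nested(a, b, c):
    # The quadratic formula as one nested expression (assumes a != 0
    # and a non-negative discriminant).
    return ((-b + math.sqrt(b*b - 4*a*c)) / (2*a),
            (-b - math.sqrt(b*b - 4*a*c)) / (2*a))

def roots_flat(a, b, c):
    # The same computation artificially flattened into temporaries;
    # the reader must now reconstruct the expression tree mentally.
    disc = b*b - 4*a*c
    root = math.sqrt(disc)
    denom = 2*a
    return ((-b + root) / denom, (-b - root) / denom)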

I was doing an intro to Lisp when someone brought up the question of
reading deeply nested stuff. It occurred to me that, if the computation
is indeed the moral equivalent of the quadratic formula, calling various
lower-level functions instead of arithmetic operators, then it is
/worse/ to be reading a flattened version in which subexpression results
are pulled into local variables, because then one has to mentally
decipher the actual hierarchical computation from the bogus flat sequence.

So if we have:

(defun some-vital-result (x y z)
  (finally-decide
   (if (serious-concern x)
       (just-worry-about x z)
       (whole-nine-yards x
                         (composite-concern y z)))))

...well, /that/ visually conveys the structure of the algorithm, almost
as well as a flowchart (as well if one is accustomed to reading Lisp).
Unwinding that into an artificial flattening /hides/ the structure.
Since when is that "more explicit"? The structure then becomes implicit
in the temp variable bindings and where they get used and in what order
in various steps of a linear sequence forced on the algorithm.
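For contrast, a sketch of what such a flattening might look like,
reusing the hypothetical helpers above (illustration only, not runnable
code); note that the naive flattening does not merely hide the branch,
it even changes the behavior by evaluating both arms unconditionally:

(defun some-vital-result (x y z)
  (let* ((serious (serious-concern x))
         (worry (just-worry-about x z))        ; now always evaluated
         (composite (composite-concern y z))   ; now always evaluated
         (whole (whole-nine-yards x composite))
         (choice (if serious worry whole)))
    (finally-decide choice)))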

I do not know what Zen is, but I do know that is not Zen.

Yes, the initial reaction of a COBOL programmer to a deeply nested form
is "whoa! break it down for me!". But that is just lack of familiarity.
Anyone in a reasonable amount of time can get used to and then benefit
from reading nested code. Similarly with every form returning a
value...the return statement looks silly in pretty short order if one
spends any time at all with a functional language.


kenny
Ingvar Mattsson
2003-10-06 09:29:45 UTC
Permalink
I think everyone who used Python will agree that its syntax is
the best thing going for it.
I've used Python. I don't agree.
I'd be interested to hear your reasons. *If* you take the sharp distinction
that python draws between statements and expressions as a given, then python's
syntax, in particular the choice to use indentation for block structure, seems
to me to be the best choice among what's currently on offer (i.e. I'd claim
that python's syntax is objectively much better than that of the C and Pascal
descendants -- comparisons with smalltalk, prolog or lisp OTOH are an entirely
different matter).
(I'm ignoring the followup-to because I don't read comp.lang.python)
Indentation-based grouping introduces a context-sensitive element into
the grammar at a very fundamental level. Although conceptually a
block is indented relative to the containing block, the reality of the
situation is that the lines in the file are indented relative to the
left margin. So every line in a block doesn't encode just its depth
relative to the immediately surrounding context, but its absolute
depth relative to the global context. Additionally, each line encodes
this information independently of the other lines that logically
belong with it, and we all know that data encoded in one place may be
wrong, but it is never inconsistent; data encoded redundantly on every
line, as here, can be.
There is yet one more problem. The various levels of indentation
encode different things: the first level might indicate that it is
part of a function definition, the second that it is part of a FOR
loop, etc. So on any line, the leading whitespace may indicate all
sorts of context-relevant information. Yet the visual representation
is not only identical between all of these, it cannot even be
displayed.
It's actually even worse than you think. Imagine you want "blank
lines" in your code, so act as paragraph separators. Do these require
indentation, even though there is no code on them? If so, how does
that interact with a listener? From what I can tell, the option chosen
in the Python (the language) community, the listener and the file
reader have different views on blank lines. This makes it harder than
necessary to edit stuff in one window and "just paste" code from
another. Bit of a shame, really.

//ingvar
--
When it doesn't work, it's because you did something wrong.
Try to do it the right way, instead.
Kaz Kylheku
2003-10-15 18:32:31 UTC
Permalink
Secondly, it would be unoptimizeable. The result of evaluating a
lambda expression is an opaque function object. It can only be called.
This is not true. When the compiler sees the application of a lambda,
it can inline it and perform further optimizations, fusing together
its arguments, its body and its context.
Kindly keep in mind the overall context of the discussion, which is
HOF's versus macros. The closures being discussed are ones passed down
into functions. Those closures typically cannot be inlined, except
under very special circumstances taken advantage of by a compiler with
very smart global optimizations.
If a macro works in a particular situation, then any equivalent HOF
can be inlined there as well. Granted, not all compilers will
actually do so, but the possibility trivially exists. This does not
depend on "very smart" global optimizations.
That is a case of bringing all of the function into the present
context so it can be mixed with the closures manually created there.
A macro controls how much material is inlined and how much isn't.
What I said here is silly because a HOF can also be structured to have
inlined and noninlined parts: an inlined skeleton function that calls
non-inlined functions.
A macro can have its own binary interface between the expanded
material and some material in a library that is associated with the
macro.
The macro decides what forms need to be made into a closure that is
passed to the library and which are not.
This is the real point; in a Lisp macro, material from parameters is
usually spun into closures precisely because inlining is not wanted;
it's a compromise which allows some remote function to call back into
the present lexical environment, allowing the macro to generate less
inlined bloat.
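A minimal sketch of that pattern (the names are invented for
illustration): the macro inlines almost nothing; it spins its body into
a closure and hands it across a fixed interface to a library function.

(defun call-with-timing (thunk)          ; the library side
  (let ((start (get-internal-real-time)))
    (unwind-protect (funcall thunk)
      (format t "~&took ~D ticks~%"
              (- (get-internal-real-time) start)))))

(defmacro with-timing (&body body)       ; the expanded side
  `(call-with-timing (lambda () ,@body)))

Each use of WITH-TIMING expands into one closure and one call; the
timing machinery itself is never duplicated at the call sites.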

In functional languages with lazy evaluation, every expression is
effectively a closure. When you call some function f with argument x,
that x is effectively a lambda function which returns the value of x
in its original environment.

When the argument to a function is an expression, it need not be
evaluated prior to the call; it may be delayed until the last possible
moment when its value is actually needed, and it may never be
evaluated at all.

So it seems as if higher order functions have the evaluation control
of macros. But delaying evaluation is not the same as control over
evaluation. Moreover, a macro is not just some mechanism for delaying
the evaluation of expressions; it's a mechanism for assigning an
arbitrary meaning to expressions.

A HOF can generate code only by interpolating parameters into a fixed
body. The meaning of everything stays the same. A HOF can trivially
replace a macro under two conditions:

1. The macro is used as an interface to hide lambdas from the user.
This is not necessary if the language has lazy evaluation. Every
expression is already a lambda, capable of being passed to a foreign
environment and evaluated there, with all the hooks to the original
environment.

2. The macro is used as a trivial inlining mechanism, which
substitutes expressions into a template. HOF inlining effectively does
the same thing. Since evaluation is lazy, there is little or no
semantic difference between evaluating parameters and calling a
function, or inserting the unevaluated expressions into the body and
inlining it.

In Common Lisp, we don't have the lazy evaluation semantics, so
certain macro uses that fall under 1. are essential. People who argue
against these always use HOF examples from another programming
language. But CL programmers don't want to switch to another
programming language. You can't switch programming languages while
holding everything else constant.

Certain macro uses in Lisp that seem to fall under 2. are also
essential; some macros that do nothing more than substitute their
arguments into some functional template nevertheless have imperative
semantics, such as conditional or repeated evaluation of some forms.
When Lisp programmers demonstrate these uses, again, they are
countered by examples from a different programming language. ``Gee, if
you only had virtual sequences (or whatever gadget), you wouldn't need
that imperative DO- stuff.'' But now you are no longer talking about
just HOF's, but HOF's plus virtual sequences.
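To make the point about imperative semantics concrete, a minimal
sketch: WHILE below merely substitutes its arguments into a fixed
template, yet no function could replace it in Common Lisp, because
TEST must be evaluated repeatedly and BODY conditionally.

(defmacro while (test &body body)
  `(loop (unless ,test (return))
         ,@body))

;; (let ((i 0)) (while (< i 3) (print i) (incf i))) prints 0, 1, 2.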

In general, for every macro use, you could design a programming
language feature, such as higher order functions, lazy sequences and
whatnot, and then claim that the macro use is unnecessary when you
have that feature. The problem is that there is no end to this
progression. Macros can implement anything you want (arbitrary syntax
to semantics mapping), and that anything can always be given a name
and codified in some offshoot academic language under a coat of
hard-coded syntax, and you can then claim that you eliminated macros
once and for all for all situations that you (and by implication
everyone else) cares about.
Alexander Schmolck
2003-10-08 17:22:31 UTC
Permalink
Well, I would say that kanji is badly designed, compared to latin
alphabet. The voyels are composed with consones (with diacritical
marks) and consones are written following four or five groups with
additional diacritical marks to distinguish within the groups. It's
more a phonetic code than a true alphabet.
Huh? You seem to be confused (BTW French is misleading here: it's vowels and
consonants in English). *Kanji* are not phonetic, you seem to be talking about
*kana*. And the blanket claim that Japanese spelling in kana is badly designed
compared to say, English orthography seems really rather dubious to me.

'as
unknown
2003-10-09 00:24:52 UTC
Permalink
I think Python's problem is its success. Whenever something is
successful, the first thing people want is more features. Hell, that is
how you know it is a success. The BDFL still talks about simplicity,
but that is history. GvR, IMHO, should have chased wish-listers away with
"use Lisp" and kept his gem small and simple.
That's silly. Something being successful means people want to use it
to get things done in the real world. At that point they start
needing the tools that other languages provide for dealing with the
real world. The real world is not a small and simple place, and small
simple systems are not always enough to cope with it. If GVR had kept
his gem small and simple, it would have remained an academic toy, and
I think he had wider-reaching ambitions than that.
Kenny Tilton
2003-10-09 23:47:53 UTC
Permalink
|Do they ever plan to do a compiler for it [Python]?
You mean like Psyco?
Been there, done that. (and it's great, and getting better).
Oh, excellent name. OK, the context was "what are Python's aspirations?".
Is Python now no longer content to be a (very powerful) scripting
language? Or is it just tired of being ragged on for being slow?
--
http://tilton-technology.com
What?! You are a newbie and you haven't answered my:
http://alu.cliki.net/The%20Road%20to%20Lisp%20Survey
Alex Martelli
2003-10-10 16:54:46 UTC
Permalink
Lulu of the Lotus-Eaters wrote:
...
Python never had an "aspiration" of being "just a scripting language",
Hmmm, at the start it sure seemed like that. Check out

http://www.python.org/search/hypermail/python-1992/0001.html

actually from late '91. By 1991 Guido was already calling Python
"a prototyping language" and "a programming language" (so, the
"just a scripting language" was perhaps only accurate in 1990), but
in late '91 he still wrote:

"""
The one thing that Python definitely does not want to be is a GENERAL
purpose programming language. Its lack of declarations and general laziness
about compile-time checking is definitely aimed at small-to-medium-sized
programs.
"""

Apparently, it took us (collectively speaking) quite a while to realize
that the lack of declarations and compile-time checks aren't really a
handicap for writing larger programs (admittedly, Lispers already knew
it then -- so did I personally, thanks also to experiences with e.g.
Rexx -- but I didn't know of Python then). So, it _is_ historically
interesting to ascertain when the issue of large programs first arose.
nor WAS it ever such a thing. From its release, Python was obviously a
language very well suited to large scale application development (as
Well, clearly that was anything but obvious to Guido, from the above
quote. Or maybe you mean by "release" the 1.0.0 one, in 1994? At
that time, your contention becomes quite defensible (though I can't
find a Guido quote to support it, maybe I'm just not looking hard
enough), e.g. http://www.python.org/search/hypermail/python-1994q1/0050.html
where Bennett Todd muses
"""
I think Python will outstrip every other language out there, and Python
(extended where necessary in C) will be the next revolutionary programming
tool ... Perl seems (in my experience) to be weak for implementing large
systems, and having them run efficiently and be clear and easy to maintain.
I hope Python will do better.
"""
So, here, the idea or hope that Python "will do better" (at least wrt
Perl) "for implementing large systems" seems already in evidence, though
far from a community consensus yet.


I do find it fascinating that such primary sources are freely available
on the net -- a ball for us amateur historians...!-)



Alex
Terry Reedy
2003-10-10 02:15:21 UTC
Permalink
"Kenny Tilton" <ktilton at nyc.rr.com> wrote in message
Post by Kenny Tilton
You mean like Psyco?
Been there, done that. (and it's great, and getting better).
Oh, excellent name.
PYthon Specializing COmpiler, slightly permuted. Also somehow apt for
something that dynamically compiles bytecode to type-specific machine
code. Many of us did not quite believe it until the pudding was
cooked.
Post by Kenny Tilton
OK, the context was "what are Python's aspirations?".
Is Python now no longer content to be a (very powerful) scripting
language?
Prime focus is still correct and readable code with execution speed
somewhat secondary but not ignored. The interpreter slowly speeds up;
more compiled C extension modules appear; other compilation options
appear; and 3 gig processors run Python about as fast as P125s did
with compiled C.
Post by Kenny Tilton
Or is it just tired of being ragged on for being slow?
I suspect that that pushes a bit too, but I can't speak for anyone in
particular.

Terry J. Reedy
Alexander Schmolck
2003-10-15 17:33:43 UTC
Permalink
If for some reason you believe that macros will have a different
effect--perhaps decreasing simplicity, clarity, and directness then
I'm not surprised you disapprove of them. But I'm not sure why you'd
think they have that effect.
Well, maybe he's seen things like IF*, MVB, RECEIVE, AIF, (or as far as
simplicity is concerned LOOP)...?
I'm not saying that macros always have ill-effects, but the actual
examples above demonstrate that they *are* clearly used to by people
to create idiosyncratic versions of standard functionality. Do you
really think clarity, interoperability or expressiveness is served
if person A writes MULTIPLE-VALUE-BIND, person B MVB and person C
RECEIVE?
Yes. But that's no different with macros than if someone decided that
(defun begin (list) (first list))
(defun end (list) (rest list))
The reason it's different is that people apparently engage in the former
(which we seem to agree is undesirable) behavior, but not in the latter. I
suppose this is
because programmers generally feel that wasting CPU cycles is a bad thing (I
know you could DECLAIM INLINE the above in CL). Also, you can't do things like
AIF as a function.
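(For readers who have not met it: AIF is the "anaphoric if", a two-line
macro that binds the value of its test form to IT. A sketch:

(defmacro aif (test then &optional else)
  `(let ((it ,test))
     (if it ,then ,else)))

;; (aif (gethash key table) (use it) (complain))

The deliberate capture of IT in the caller's code, and the delayed
evaluation of the two branches, are exactly the two things a function
cannot do.)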
As almost everyone who has stuck up for Lisp-style macros has
said--they are just another way of creating abstractions and thus, by
necessity, allow for the possibility of people creating bad
abstractions.
I guess this much everyone agrees on :)

The point that the lisp side seems to under-appreciate is that (even powerful)
abstractions have their limitations and also come at a cost. This cost should
be carefully analyzed because it is not always worth paying (and even if it
is, as is surely the case for macros in CL, awareness of the downside might
help minimize it).

Unfortunately, I'm pretty worn out by this thread now and have to catch up
with my work, so I won't go into detail.
But if I come up with examples of bad functional abstractions or poorly
designed classes, are you going to abandon functions and classes? Probably
not.
Depends on the *implementation* of those constructs themselves and *which*
features (e.g. multiple inheritance). I'd forgo C++ for some saner language
without built-in OO anytime.
It really is the same thing.
Would you like to see continuations in CL? If not, note that your argument
would be equally applicable.
(deftest foo-tests ()
(check
(= (foo 1 2 3) 42)
(= (foo 4 5 6) 99)))
Note that this is all about the problem domain, namely testing.
I think the example isn't a bad one, in principle, in practice
however I guess you could handle this superiorly in python.
Well, I admire your faith in Python. ;-)
It would be less a question of faith, but more an incorrect conclusion.
# like python's unittest.TestCase, only that it doesn't "disarm"
# exceptions
TestCase = awmstest.PermeableTestCase
#TestCase = unittest.TestCase
...
assert foo(1,2,3) == 42
assert foo(4,5,6) == 99
If these assertions fail and raise an Exception, the debugger is entered and emacs automatically
will jump to the right source file and line of code (I am not
mistaken in thinking that you can't achieve this using emacs/CL,
right?)
No, you're mistaken. In my test framework, test results are signaled
with "conditions" which are the Common Lisp version of exceptions. Run
in interactive mode, I will be dropped into the debugger at the point
the test case fails where I can use all the facilities of the debugger
Yep, I'm aware.
to figure out what went wrong including jumping to the code in
question
OK, I was under the maybe mistaken impression that this wasn't generally
possible (if by "jumping to the code" you mean actually jumping to *the line*
where the error occurred, in your editor window).
examining stack frames, and then if I think I've figured out
the problem, I can redefine a function or two and retry the test case
and proceed with the rest of my test run with the fixed code.
Yes, restarts sure are handy for interactive development (and absent in
python).
(Obviously, after such a run you'd want to re-run the earlier tests to
make sure you hadn't regressed. If I really wanted, I could keep track
of the tests that had been run prior to such a change and offer to
rerun them automatically.)
and I can interactively inspect the stackframes and objects that
were involved in the failure.
Yup. Me too. Can you retry the test case and proceed with the rest of
your tests?
Nope, not to my knowledge (if anyone knows how to do this with pdb, I'd love
to hear about it).
Yup. That's really handy. I agree.
So, in all sincere curiosity, why did you assume that this couldn't be
done in Lisp.
Well, basically because I tried to get some reasonable debugging environment
working (with ilisp/cmucl or clisp) and failed miserably (and googling didn't
help much further either). That macros (esp. reader macros) wouldn't interact
all that well with source level debugging also didn't seem like the most
unreasonable assumption.

Any suggestions how to get something like the following to source debug with
emacs/ilisp/cmucl?

;;test-debug.lisp
(declaim (optimize (debug 3)))
(defun buggy-defun ()
  (loop with bar = 10
        for i below 30
        collect (/ i (- i bar)))) ;<= I'd like to start stepping here in emacs
(buggy-defun)
I really am interested as I'm writing a book about Common Lisp and part of
the challenge is dealing with people's existing ideas about the language.
Good idea. I think it would also be worthwhile to have a good look at close
competitors like python in order to do so effectively.
Feel free to email me directly if you consider that too far offtopic for
c.l.python.
-Peter
'as
Peter Seibel
2003-10-07 22:37:20 UTC
Permalink
Using parentheses and rpn everywhere makes lisp very easy to parse,
but I'd rather have something easy for me to understand and hard for
the computer to parse.
That would be a strong argument--seriously--if the only folks who
benefited from the trivial mapping between Lisp's surface syntax and
its underlying representation were the compiler writers. I certainly agree that if by
expending some extra effort once compiler writers can save their users
effort every time they write a program that is a good trade off.

If the only thing a "regular" programmer ever does with a language's
syntax is read and write it, then the only balance to be struck is
between the perhaps conflicting goals of readability and writability.
(For instance more concise code may be more "writable" but taken to an
extreme it may be "write only".) But "machine parsability" beyond,
perhaps, being amenable to normal machine parsing techniques (LL,
LALR, etc.) should not be a consideration. So I agree with you.

But Lisp's syntax is not the way it is to make the compiler writer's
job easier. In Lisp "regular" programmers also interact with the code
as data. We can easily write code generators (i.e. macros) that are
handed a data representation of some code, or parts of code, and have
only to return a new data structure representing the generated code.
This is such a useful technique that it's built into the compiler--it
will run our code generators when it compiles our code so we don't
have to screw around figuring out how to generate code at runtime and
get it loaded into our program. *That's* why we don't mind, and, in
fact, actively like, Lisp's syntax.
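A minimal sketch of what being "handed a data representation of some
code" looks like in practice -- a macro is an ordinary function from
list structure to list structure:

(defmacro swap (a b)
  (let ((tmp (gensym)))             ; manipulate the forms as plain data
    `(let ((,tmp ,a))
       (setf ,a ,b ,b ,tmp))))

;; (macroexpand-1 '(swap x y))  =>  (LET ((#:G42 X)) (SETF X Y Y #:G42))

(Common Lisp already provides ROTATEF for this; SWAP is just the
traditional teaching example.)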

The point is not that the syntax, taking in total isolation from the
rest of the language, is necessarily the best of all possible syntaxi.
The point is that the syntax makes other things possible that *way*
outweigh whatever negatives the syntax may have.

I'd humbly suggest that if you can't see *any* reason why someone
would prefer Lisp's syntax, then you're not missing some fact about
the syntax itself but about how other language features are supported
by the syntax.

-Peter
--
Peter Seibel peter at javamonkey.com

Lisp is the red pill. -- John Fraser, comp.lang.lisp
Mario S. Mommer
2003-10-08 08:50:11 UTC
Permalink
Using parentheses and rpn everywhere makes lisp very easy to parse,
but I'd rather have something easy for me to understand and hard for
the computer to parse.
Intrestingly enough, I think this is a question of getting used to
it. The notation is so relentlessly regular that once you got it,
there are no more syntactical ambiguities. None. Ever.

It is the difference (for me) between reading roman numerals (that
would be the baroque-ish syntax of other languages, full of
irregularities, special cases, and interference patterns), and arabic
numerals (that would be lisp). I never have a doubt about what the
S-expr. representation encodes.
Erann Gat
2003-10-06 18:53:55 UTC
Permalink
...
(That's why the xrange hack was invented.)
Almost right, except that xrange is a hack.
I presume you meant to say that xrange is *not* a hack. Well, hackiness
Please don't put words in my mouth, thanks. xrange _IS_ a hack,
Perhaps English is not your first language? When one says "Almost right,
except..." the implication is that you are disagreeing with something.
But you didn't disagree, you parroted back exactly what I said, making it
not unreasonable to assume that you inadvertently left out the word "not".
it was introduced in Python back in the dark ages before Python had the
iterator protocol.
But it's always the dark ages. Any non-extensible language is going to be
missing some features, but that is usually not apparent until later. The
difference between Python and Lisp is that when a user identifies a
missing feature in Lisp all they have to do is write a macro to implement
it, whereas in Python they have no choice but to wait for the next version
to come along.
Now, clearly, _uniformity_ in the code will be to the advantage
of the team and of the project it develops.
Yes. But non-extensible languages like Python only enforce the appearance
of uniformity, they do not and cannot enforce true stylistic uniformity.
As a result there are two kinds of code, the kind that fits naturally into
the style of the language, and the kind that doesn't and has to be
shoehorned in. Of course, superficially both kinds of code sort of look
the same. But underneath code of the second sort becomes a horrible mess.
Here is the crux of our disagreement. If you believe everybody can
become a good language designer, I think the onus is on you to explain
why most languages are not designed well.
Because most languages are designed by people who have had very little
practice designing languages. And they've had very little practice
designing languages because designing languages is perceived as a hard
thing to do. And if you try to do it without the right tools it is in
fact a hard thing to do. If you try to do it with the right tool (Lisp)
then it's very easy, you can do lots of iterations in a short period of
time, and gain a lot more experience about what works and what doesn't.
That's why people who use Lisp tend to be good language designers, and
people who don't tend not to be. It's also why every language feature
ever invented was invented in Lisp first.
If I thought Python's design was badly done, and a bad fit for my
problem domain, then, obviously, I would not have chosen Python (I
hardly lack vast experience in many other programming languages,
after all). Isn't this totally obvious?
No, it's not. You have taken a strong position against macros, which
means that if you ever encounter a problem domain that is not a good fit
for any language that you know then you have a problem. I don't know how
you'd go about solving that problem, but I think that a likely outcome is
that you'd try to shoehorn it in to some language that you know (and maybe
not even realize that that is what you are doing).
Therefore, clearly, your assertion that
(to adopt Python) one has "to give up all hope" of such goals is not
at all well-founded. There is nothing intrinsic to Python that can
justify it.
Actually, there is. Python's dynamicism is so extreme that efficient
native code compilation is impossible unless you change the semantics of
the language.
nuclear warheads
An inappropriate metaphor.
while in Python, where iterators are "the other way around" (they
...
Forcing you to either waste a lot of memory or write some very awkward
code.
I _BEG_ your pardon...?
Oh, right, I forgot they added the "yield" thingy. But Python didn't
always have yield, and before it had yield you were stuck.

I could come up with another example that can't be done with yield, but
your response will undoubtedly be, "Oh, that can be handled by feature FOO
which is going to be in Python 3.0" or some such thing. The point is, a
Python programmer is dependent on Guido for these features. Lisp
programmer's aren't dependent on anyone.
All this does is demonstrate how destructors can be used to emulate
unwind-protect. If you use this to implement crtical-section in the
obvious way you will find that your critical sections do not nest
properly.
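For contrast, a minimal sketch of the UNWIND-PROTECT version (GRAB-LOCK
and RELEASE-LOCK are assumed primitives here, not any particular
implementation's API). Because each expansion releases exactly the lock
it acquired, on any exit path, nested critical sections are
unproblematic:

(defmacro with-critical-section ((lock) &body body)
  `(progn
     (grab-lock ,lock)
     (unwind-protect (progn ,@body)    ; body may itself nest sections
       (release-lock ,lock))))         ; runs on normal and non-local exit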
I have no idea of how with-maintained-condition would find and
examine each of the steps in the body in this example; isn't
the general issue quite equivalent to the halting problem, and
thus presumably insoluble?
Only if the conditions you write are unconstrained. But there is no
reason for them to be unconstrained. The WITH-MAINTAINED-CONDITION macro
would presumably generate a compile-time error if you asked it to maintain
a condition that it didn't know how to handle.
If your claim is that macros are only worthwhile for "artificial
intelligence" code that is able, by perusing other code, to infer
(and perhaps critique?) the physical world model it is trying to
control, and modify the other code accordingly, I will not dispute
that claim.
s/only/also/
Ah, must be a mutation of the whitespace-eating nanovirus
No, auto-indent in emacs Python mode will generate indentation bugs.
Whoa there. I detect in this tirade a crucial unspoken assumption: that
One Language is necessarily going to be all I ever learn, all I ever use,
for "any programming domain I might ever choose to explore".
No, the issue is general. If you concede that for any non-extensible
language there are things for which that language is not well suited, then
for any finite number of such languages there will be things for which
none of those languages are well suited. At that point you only have two
choices: use an inappropriate language, or roll your own. And if you
choose to roll your own the easiest way to do that is to start with Lisp.

Of course, the same reasoning that leads you to conclude that Lisp is good
for *something* also leads you inexorably to the conclusion that Lisp is
good for *anything*, since its extensibility extends (pardon the pun) to
everything, not just features that happen not to exist in other languages
at the time.
There being no open-source, generally useful operating system kernels in
any language but C
That is indeed unfortunate. Perhaps some day the Lisp world will produce
its own Linus Torvalds.
I think I have a reasonably deep understanding of "what programming is"
My last remarks weren't addressed to you in particular, but to anyone who
might be reading this dialog. If you want (meaning if one wants) to gain
a deep understanding of how computers work, Lisp provides a better path
IMO. (And Eric Raymond thinks so too.)
I think macros (Lisp ones in particular) are a huge win in situations
in which the ability to enrich / improve / change the language has more
advantages than disadvantages. So, I think they would be a great fit
for languages which target just such situations, such as, definitely,
Perl, and perhaps also Ruby; and a net loss for languages which rely on
simplicity and uniformity, such as, definitely, Python.
That's not an unreasonable position.

E.
Raffael Cavallaro
2003-10-13 01:26:51 UTC
Permalink
In article <bmcdsd$fp8$1 at newsreader2.netcologne.de>,
Many programming languages require you to build a model upfront, on
paper or at least in your head, and then write it down as source code.
This is especially one of the downsides of OOP - you need to build a
class hierarchy very early on without actually knowing if it is going to
work in the long run.
This parallels Paul Graham's critique of the whole idea of program
"specifications." To paraphrase Graham,for any non-trivial software,
there is no such thing as a specification. For a specification to be
precise enough that programmers can convert it directly into code, it
must already be a working program! What specifications are in reality is
a direction in which programmers must explore, finding in the process
what doesn't work and what does, and how, precisely, to implement that.

Once you've realized that there is really no such thing as the waterfall
method, it follows inevitably that you'll prefer bottom up program
development by exploratory methods. Once you realize that programs are
discovered, not constructed from a blueprint, you'll inevitably prefer a
language that gives you freedom of movement in all directions, a
language that makes it difficult to paint yourself into a corner.
Erann Gat
2003-10-13 21:26:53 UTC
Permalink
Not only did Erann refuse to reciprocate Alex Martelli's ad hominem
He earlier posted my assertions were "mind-boggling"
No, I didn't, as a quick Google search will verify. (Notwithstanding, I
don't consider "mind-boggling" to be a particularly pejorative term.)
, then, without any apology whatsoever, that he found them reasonable.
I'm pretty sure I didn't do that either, because I don't.
So it's not
a matter of E.G. "reciprocating attacks", but, rather, INITIATING
them (and weaving and ducking to admit any responsibility whatsoever).
Actually it appears to be a matter of you making things up out of whole
cloth (not uncommon on Usenet I'm afraid).

E.
Lulu of the Lotus-Eaters
2003-10-07 23:36:18 UTC
Permalink
| 3- what do you mean "printed"? A double-click on any parenthesis
| selects the enclosed list so it's quite easy to see what it encloses.

I kept clicking on the parenthesis in all your samples. All my
newsreader did was go to text-select mode!

Btw. I believe the word "printed" means "printed"... you've seen that
stuff where ink is applied to paper, yes?

Yours, Lulu...

--
---[ to our friends at TLAs (spread the word) ]--------------------------
Echelon North Korea Nazi cracking spy smuggle Columbia fissionable Stego
White Water strategic Clinton Delta Force militia TEMPEST Libya Mossad
---[ Postmodern Enterprises <mertz at gnosis.cx> ]--------------------------
Pascal Costanza
2003-10-12 15:43:47 UTC
Permalink
In retrospect I should have given a more obvious possibility.
As some point I hope to have computer systems I can program
by voice in English, as in "House? Could you wake me up
at 7?" That is definitely a type of programming, but Lisp is
a language designed for text, not speed.
I don't understand that last sentence. Could you clarify this a bit? You
don't want to say that there is an inherent dichotomy between text and
speed, do you?!?
I believe it is an accepted fact that uniformity in GUI design is a good
thing because users don't need to learn arbitrarily different ways of
using different programs. You only need different ways of interaction
when a program actually requires it for its specific domain.
My spreadsheet program looks different from my word processor
looks different from my chemical structure editor looks different from
my biosequence display program looks different from my image
editor looks different from my MP3 player looks different from my
email reader looks different from Return to Castle Wolfenstein ....
There are a few bits of commonality; they can all open files. But
not much more.
...but you probably know from the start where to find the menus, what
the shortcuts are for opening and saving files, how to find the online
help, and so forth.

Lisp also has this to a certain degree: It's always clear what
constitutes the meaning of an s-expression, namely its car, no matter
what language "paradigm" you are currently using.
Toss out the MP3 player and RtCW and there
is more in common. Still, the phrase "practicality beats purity"
seems appropriate here.
I firmly believe people can in general easily handle much more
complicated syntax than Lisp has. There's plenty of room to
spare in people's heads for this subject.
Sure, but is it worth it?
Do you have any doubt to my answer? :)
No, not really. :)
Convenience is what matters. If you are able to conveniently express
solutions for hard problems, then you win. In the long run, it doesn't
matter much how things behave in the background, only at first.
Personally, I would love to write equations on a screen like I
would on paper, with integral signs, radicals, powers, etc. and
not have to change my notation to meet the limitations of computer
input systems.
I know people who have even started to use s-expressions for mathematical
notation (on paper), because they find it more convenient.
For Lisp is a language tuned to keyboard input and not the full
range of human expression. (As with speech.)
There is some research going on to extend Lisp even in this regard
(incorporating more ways of expression).
(I know, there are people who can write equations in TeX as
fast as they can on paper. But I'm talking about lazy ol' me
who wants the convenience.)
Or, will there ever be a computer/robot combination I can
teach to dance? Will I do so in Lisp?
?!?
It seems to me that in Python, just as in most other languages, you
always have to be aware that you are dealing with classes and objects.
Why should one care? Why does the language force me to see that when it
really doesn't contribute to the solution?
Hmmm.. Is the number '1' an object? Is a function an object?
What about a module? A list? A class?
print sum(range(100))
4950
Where in that example are you aware that you are dealing with classes
and objects?
Well, maybe I am wrong. However, in a recent example, a unit test
expressed in Python apparently needed to say something like
"self.assertEqual ...". Who is this "self", and what does it have to do
with testing? ;)
If it's only a syntactical issue, then it's a safe bet that you can add
that to the language. Syntax is boring.
Umm... Sure. C++ can be expressed as a parse tree, and that
parse tree converted to an s-exp, which can be claimed to be
a Lisp; perhaps with the right set of macros.
That's computational equivalence, and that's not interesting.
Which is why I didn't see the point of original statement. My
conjecture is that additional syntax can make some things easier.
That a problem can be solved without new syntax does not
contradict my conjecture.
If additional syntax makes specific things easier, then in god's name
just add it! The loop macro in Common Lisp is an example of how you can
add syntax to make certain things easier. This is not rocket science.

The point here is that for most languages, if you want to add some
syntax, you have to change the definition of the language, extend the
grammar, write a parser, extend a compiler and/or interpreter, maybe
even the internal bytecode representation, have wars with other users of
the language whether it's a good idea to change the language that way,
and so forth. In Lisp, you just write a bunch of macros and you're done.
No problems with syntax except if you want them, most of the time no
problems with changes to the language (far less than in other
languages), no messing around with grammars and related tools, no need
to know about compiler/interpreter internals and internal
representation, no wars with other language users, and so forth.

Syntax is boring. ;)
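In that spirit, a minimal sketch of "just write a bunch of macros": a
tiny Python-flavored comprehension form, built on LOOP in a few lines,
with no change whatsoever to the language implementation:

(defmacro comprehend (expr (var list) &optional (test t))
  `(loop for ,var in ,list
         when ,test
         collect ,expr))

;; (comprehend (* x x) (x '(1 2 3 4)) (oddp x))  =>  (1 9)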
(with-allocation-from :shared-memory
...)
;)
Any more questions?
Yes. Got a URL for documentation on a Lisp providing access
to shared memory?
OK, I am sorry that I have lost focus here. You have given this example
as one that shows what probably cannot not be done in Lisp out of the
box. However, most Lisp implementations provide a way to access native
code and in that way deal with specific features of the operating
system. And there is a de-facto standard for so-called foreign function
calls called UFFI that you can use if you are interested in a
considerable degree of portability.

I don't know a lot about the specifics of shared memory, so I can't
comment on your specific questions.
This service is straight-forward to support in C/C++. It
sounds like for Lisp you are dependent on the implementation,
in that if the implementation doesn't support access to its
memory allocator/gc subsystem then it's very hard to
write code for this hardware on your own. It may be
possible to use an extension (written in C? ;) to read/write
to that persistent memory using some sort of serialization,
but that's the best you can do -- you don't have live objects
running from nonvolatile store -- which is worse than C++.
This should be possible as a combination of a FFI/UFFI and the CLOS MOP.
AFAIK, you can define the memory layout and the allocation of memory for
specific metaclasses. However, I really don't know the details.

The paper at
http://www-db.stanford.edu/~paepcke/shared-documents/mopintro.ps might
be interesting. For UFFI, see http://uffi.b9.com/

As Paul Graham put it, yes, there is some advantage when you use the
language the operating system is developed in, or it ++.

Pascal
Edi Weitz
2003-10-09 20:54:43 UTC
Permalink
Is there a free Lisp/Scheme implementation I can experiment with
which include in the distribution (without downloading extra
- unicode
- xml processing (to some structure which I can use XPath on)
- HTTP-1.1 (client and server)
- URI processing, including support for opening and reading from
- regular expressions on both 8-bit bytes and unicode
- XML-RPC
- calling "external" applications (like system and popen do for C)
- POP3 and mailbox processing
Yes. Allegro CL (ACL) for one.
As far as I can tell, there isn't. I'll need to mix and match
packages
You obviously can't "tell" too well.
It is true that AllegroCL has all these features and it probably is
the only CL implementation that includes all of them out of the box
but it is not true that it is free (which was one of the things
Mr. Dalke asked for). At least it wasn't true the last time I talked
to the Franz guys some days ago. If that has changed in the last week
please let me know... :)

You might be able to get most of these features with "free" CL
implementations but not all at once I think. (AFAIK CLISP is currently
the only "free" CL which supports Unicode but it is lacking in some
other areas.)

As far as "mix and match" of packages is concerned: Use Debian
(testing) or Gentoo. I've been told it's just a matter of some
invocations of 'apt-get install' or 'emerge' to get the CL packages
you want. At least it shouldn't be harder than, say, getting stuff
from CPAN. What? You don't use Debian or Gentoo? Hey, you said you
wanted "free" stuff - you get what you pay for.

No, seriously. It'd definitely be better (for us Lispers) if we had
more freely available libraries plus a standardized installation
procedure à la CPAN. Currently we don't have that - there are
obviously far more people working on Perl or Python libraries.

So, here are your choices:

1. Buy a commercial Lisp. I've done that and I think it was a good
decision.

2. Try to improve the situation of the free CL implementations by
writing libraries or helping with the infrastructure. That's how
this "Open Source" thingy is supposed to work. I'm also doing this.

3. Run around complaining that you can't use Lisp because a certain
combination of features is not available for free. We have far too
many of these guys on c.l.l.

4. Just don't use it. That's fine with me.

It currently looks like the number of people choosing #2 is
increasing. Looks promising. You are invited to take part - it's a
great language and a nice little community... :)

Edi.

PS: You might also want to look at

<http://web.metacircles.com/cirCLe+CD>.
unknown
2003-10-03 16:21:07 UTC
Permalink
Implementing Python-like syntax in LISP or even a full Python
implementation in LISP would be a worthwhile goal (LPython?). BTW, this
kind of implementation is one of the relatively few programming tasks
that can benefit greatly from macros. The Python semantics can be mostly
done using macros and a preprocessor would translate Python syntax to
s-expression code that uses those macros.
If done straightforwardly, the language semantics would end up
different from Python's in a few ways. For example, strings in most
Lisp systems are mutable. Also, Python's class system doesn't map
onto Lisp all that well; Python's variable-scoping rules are bizarre,
and so forth. I think the result would still be worth doing but
it would be a further departure from CPython than, say, Jython is.
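One of those semantic mismatches in three lines -- Common Lisp strings
are mutable, Python strings are not:

(let ((s (copy-seq "spam")))
  (setf (char s 0) #\S)
  s)                          ; => "Spam"; no Python analogue for str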
Hans Nowak
2003-10-07 01:20:24 UTC
Permalink
[...] Fortunately, in either case, the rich
and stong suite of unit tests that surely accompanies such a
"mission-critical application" will easily detect the nefarious
deed and thus easily defang the nanoviruses' (nanovirii's? nanovirorum...?)
menace, [...]
Hmm, if I recall correctly, in Latin the plural of 'virus' is 'virus'. (I like
to write 'virii' myself, knowing that it's wrong, but it looks cute. :-)
--
Hans (hans at zephyrfalcon.org)
http://zephyrfalcon.org/
james anderson
2003-10-07 16:54:41 UTC
Permalink
...
Two words: code duplication.
...
Three words and a hyphen: Higher-Order Functions.
Most of the things that macros can do can be done with HOFs with just
as little source code duplication as with macros. (And with macros
only the source code does not get duplicated, the same not being true
for compiled code. With HOFs even executable code duplication is
often avoided -- depending on compiler technology.)
is there no advantage to being able to do either - or both - as the occasion dictates?

i'd be interested to read examples of things which are better done with HOF
features which are not available in CL. sort of the flip-side to the example
of compile-time calculation and code generation. taking into account that
generic functions are, at least to some extent, the equivalent of value-domain macro-expansion.

?
Hartmann Schaffer
2003-10-16 03:19:39 UTC
Permalink
In article <raffaelcavallaro-D3CEF4.18102214102003 at netnews.attbi.com>,
A programmer accustomed to the
functional style finds the need in non-FP languages to name every
function analogously awkward.
No one is talking about need, but about clarity of exposition.
which is essentially a style question, which usually don't have
absolute answers
It is perfectly possible to program functionally in lisp, as I'm sure
you know. It just makes code less readable to use _anonymous_ functions.
Code is no less functional when the functions are named.
this would largely depend on the function. e.g. i don't see any
benefit in adding function definitions to add 2 or multiply by 3
instead of using anonymous functions

(defun add2 (x) (+ x 2))

... many lines / pages/ screenloads of code

(map 'list #'add2 list)

doesn't look any more expositive to me than

(map 'list (lambda (x) (+ x 2)) list),

quite to the contrary. there are enough cases where you need
something like this. obviously, you can abuse this
...
Anonymous functions force the _how_ to be interleaved with the _what_,
breaking up the clarity of the _what_. Named functions (and macros)
allow the high level abstractions to be expressed in terms of _what_ is
happening, without unnecessary reference to _how_.
Anonymous functions force the reader to deal with _how_ precisely
because there is no descriptive name that expresses _what_ the funtion
does. This is an inappropriate conflation of two distinct purposes, that
can and should be separated in source code.
the problem with your position is that you make a dogma out of a
generally useful observation

hs
--
ceterum censeo SCO esse delendam
Andrew Dalke
2003-10-09 20:19:09 UTC
Permalink
I have this personal theory (used in the non-strict sense here) that
given enough time any homogenous group will split into at least two
competing factions.
Reminds me of Olaf Stapledon's "Last and First Men"? His
civilizations often had two roughly equal but opposing components.

Also reminds me of learning about the blue eyed/brown eyed
experiment in my sociology class in high school. As it turns out,
I was the only blue-eyed person in the class of 25 or so. :)
Over time it seems to me that human
beings are incapable of remaining as one single cohesive group, rather
that they will always separate into several competing factions. Or at
the very least groups will splinter off the main group and form their
own group.
Not necessarily "competing", except in a very general sense. Is
Australian English in competition with Canadian English?
However in the opensource world I expect splinters to happen frequently,
simply because there is little to no organizational control. Even
Python hasn't been immune to this phenomenon with both Jython and
Stackless emerging.
As well as PyPy and (more esoterically) Vyper.

Excepting the last, all have had the goal of supporting the C Python
standard library where reasonably possible. When not possible
(as is the case with Jython and various C extensions), then supporting
the native Java libraries.
"bristly" ;)
Ohh! Good word! I had forgotten about it.

Andrew
dalke at dalkescientific.com
Bruce Lewis
2003-10-10 13:28:51 UTC
Permalink
If your problems are trivial, I suppose the presumed lower startup
costs of Python may mark it as a good solution medium.
I find no significant difference in startup time between python and
mzscheme.
Raffael Cavallaro
2003-10-20 23:20:12 UTC
Permalink
In article <oprxcymd2n3seq94 at news.nscp.aoltw.net>,
I have in a mail-processing application 2 functions IF-FROM-LINE (used in
mbox file processing) and RFC822-COLLAPSE-HEADER (used in RFC822 header
processing) which both implement algorithms which are parameterized
by functions. This is vaguely similiar in spirit to the visitor pattern
from OO land, but much more flexible. Both of these functions are used
in multiple contexts where the anonymous functions contextualize the
operations performed under specific conditions in their implemented
algorithms. These operations have (so far) been strictly one-off animals.
I think you are in violent agreement with me.
The problem I see with the use of
the typical anonymous functional
^^^^^^^^^
1. If-from-line is a _named_ function, not an anonymous function. My
only objection was to _anonymous_ functions replacing named
abstractions, not to functional programming itself.
In the event that I ever feel a need to re-use one of them I will simply
lift the anonymous function from its original source location, give it
a top-level name et voila - instant reuse.
2. Which is precisely what I suggested in all of my previous posts.
I.e., if the anonymous function is used more than once, name it, and use
the name.
Kenny Tilton
2003-10-03 13:33:25 UTC
Permalink
just me, I prefer S-exps and there seems to be a rebirth in the Scheme
and Common Lisp communities at the moment. Ironically this seems to
have been helped by python. I learned python then got interested in
it's functional side and ended up learning Scheme and Common Lisp.
It'd be interesting to know where people got the idea of learning
Scheme/LISP from (apart from compulsory university courses)?
<g> We wonder alike. That's why I started:

http://alu.cliki.net/The%20Road%20to%20Lisp%20Survey

That recently got repotted from another cliki and it's a little mangled,
but until after ILC2003 I am a little too swamped to clean it up. But
there is still a lot of good stuff in there. On this page I grouped
folks according to different routes to Lisp (in the broadest sense of
that term): http://alu.cliki.net/The%20RtLS%20by%20Road

You will find some old-timers because I made the survey super-inclusive,
but my real interest was the same as yours: where are the New Lispniks
coming from?

Speaking of which, Mark Brady cited Python as a stepping-stone, and I
have been thinking that might happen, but the survey has yet to confirm.
Here's one: http://alu.cliki.net/Robbie%20Sedgewick's%20Road%20to%20Lisp

So Ping! Mark Brady, please hie ye (and all the others who followed the
same road to Lisp) to the survey and correct the record.

I think
that for me, it was the LPC language used in LPmuds. It had a
frightening feature called lambda closures, and useful functions such
as map and filter. Then one day I just decided to bite the bullet and
find out where the heck all that stuff came from (my background was
strongly in C-like languages at that point. LPC is like C with some
object-oriented and some FP features.)
Yes, I know, there's nothing frightening in lambda closures. But the
way they were implemented in LPC (actually just the syntax) was
terrible :)
You could cut and paste that into the survey as well. :)

kenny
Mario S. Mommer
2003-10-04 19:13:43 UTC
Permalink
I have tried on 3 occasions to become a LISP programmer, based upon
the constant touting of LISP as a more powerful language and that
ultimately S-exprs are a better syntax. Each time, I have been
stopped because the S-expr syntax makes me want to vomit.
:-)

Although people are right when they say that S-exprs are simpler, and
once you get used to them they are actually easier to read, I think
the visual impact they have on those not used to it is often
underestimated.

And to be honest, trying to deal with all these parentheses in an
editor which doesn't help you is not an encouraging experience, to say
the least. You need at least a paren-matching editor, and it is a real
big plus if it also can reindent your code properly. Then, very much
like in python, the indent level tells you exactly what is happening,
and you pretty much don't see the parens anymore.

Try it! In emacs, or Xemacs, open a file ending in .lisp and
copy/paste this into it:

;; Split a string at whitespace.
(defun splitatspc (str)
(labels ((whitespace-p (c)
(find c '(#\Space #\Tab #\Newline))))
(let* ((posnew -1)
(posold 0)
(buf (cons nil nil))
(ptr buf))
(loop while (and posnew (< posnew (length str))) do
(setf posold (+ 1 posnew))
(setf posnew (position-if #'whitespace-p str
:start posold))
(let ((item (subseq str posold posnew)))
(when (< 0 (length item))
(setf (cdr ptr) (list item))
(setf ptr (cdr ptr)))))
(cdr buf))))

Now place the cursor on the paren just in front of the defun in the
first line, and hit ESC followed by <ctrl-Q>.
If a set of macros could be written to improve LISP syntax, then I
think that might be an amazing thing. An interesting question to me
is why hasn't this already been done.
Because they are so damned regular. After some time you do not even
think about the syntax anymore.
Andrew Dalke
2003-10-09 19:35:10 UTC
Permalink
Is portability of code across different language implementations not a
priority
for LISP programmers?
there are some things which the standard does not cover.
"The standard" as I understand it, is some document written a decade
ago. Can't a few of you all get together and say "hey, the world's
advanced. Let's agree upon a few new libraries and APIs."?


Andrew
dalke at dalkescientific.com
Jon S. Anthony
2003-10-09 21:30:58 UTC
Permalink
This thing has been debunked for years. No one with a clue takes it
seriously. Even the author(s) indicate that much of it is based on
subjective guesses.
Do you have a reference? And a pointer to something better?
Even nothing is better than misinformation.
2 of 6 is better than random, so Jones' work can't be
complete bunkum.
2 of 6 is worse than flipping a coin.

/Jon
Frode Vatvedt Fjeld
2003-10-03 22:22:47 UTC
Permalink
[..] However, from an earlier post on comp.lang.python comparing a
simple loop.
Scheme
(define vector-fill!
  (lambda (v x)
    (let ((n (vector-length v)))
      (do ((i 0 (+ i 1)))
          ((= i n))
        (vector-set! v i x)))))
Python
def vector_fill(v, x):
    for i in range(len(v)):
        v[i] = x
To me the Python code is easier to read, and I can't possibly fathom
how somebody could think the Scheme code is easier to read. It truly
boggles my mind. [..]
The scheme example can only have been written by someone who is from the
outset determined to demonstrate that sexp-syntax is complicated. This
is how I'd write it in Common Lisp:

(defun vector-fill (v x)
  (dotimes (i (length v))
    (setf (aref v i) x)))

As you can see, it matches the python example quite closely.
[..] If a set of macros could be written to improve LISP syntax,
then I think that might be an amazing thing. An interesting
question to me is why hasn't this already been done.
Lisp macros and syntactic abstractions are one of those things whose
power and elegance it is somewhat hard to explain to those who have
not experienced it themselves first hand. Paul Graham's book "On Lisp"
is considered by many to be a good introduction to the subject.

I am quite comfortable with Common Lisp's syntax, and I see no
particular need for some set of macros to improve its syntax. In fact
I have no idea what so ever as to what such a set of macros would look
like.
--
Frode Vatvedt Fjeld
Mark Wilson
2003-10-13 04:13:18 UTC
Permalink
...
3. Python's syntax is one of the worst features of the language and
Ha.
How cogent.
should not be adopted by Lisp and Scheme.
Obviously not, as it would not fit in with all the rest of their
features.
That idea was the point of the post that started this thread.
Joe Marshall's analysis is establishes this. There have been no
"is establishes"? Is that a new variant on "all your bases are belong
to us"?
Typographical error.
responses to Joe Marshall's analysis that successfully refute his
analysis.
I guess I haven't been posting ENOUGH on this thread -- I don't
recall the "is establishes" analysis in question, just the usual FUD
about "copy and paste errors" and other variants on whitespace-
eating nanoviruses -- I tore a few to shreds, but it was almost
incidental to all the rest of the volume.
Check on the responses from Joe Marshall (who also has an email account
called prunesquallor, or something like that. The only person to engage
his arguments in this thread was the original poster. I would find it
enlightening to read a cogent response to his analysis, although I
doubt that one can in fact be maintained.
4. The productivity of the prolific posters must have precipitously
phaded.
True in my case -- the amount of FUD and insults posted and
demanding response being so high -- as seen above, even so I
may not have noticed all of the "is establishes" alleged ``analyses''
which sufficiently clueless readers (and you seem to be successfully
posing as one) may think "irrefutable" unless one tediously, over
and over, cuts them to confetti-size shreds and throws them back
into their proponents' faces.
The above is really uncalled for and beneath a person of your purported
stature. It makes me wonder if I was wrong to think highly of you.
Confronting Lisp may have had a deleterious effect on your thinking. As
to me being clueless, that may be true, although I have my doubts.
Clear cogent arguments advancing your views might provide me with a
clue.
I guess I'm just about ready to drop
off this interminable thread, except presumably for whatever
further dismantling of insults, FUD and disinformation I just
can't resist.
Please resist.
5. Use Ruby, be happy.
I earnestly hope you'll start a new cross-thread between c.l.lisp and
c.l.ruby about first-class functions, macros, case sensitivity, regular
expressions as inherently embedded in the language, and whatever
else can most enflame them -- please leave c.l.ruby out of it, tx.
I have nothing against Lisp or Scheme (I'm learning both). I have
nothing against Ruby either (Ruby is my favorite language and the first
one I started learning). Its community has the advantage of not
considering other programming languages a waste of time. I have never
heard a Ruby programmer complain about others "wasting" their efforts
on other programming languages. I have heard such talk from Python
people. So what is it with the Python people?
Alex
Regards,

Mark Wilson
Ingvar Mattsson
2003-10-09 09:31:51 UTC
Permalink
"Karl A. Krueger" <kkrueger at example.edu> writes:

[SNIP]
Incidentally, I regard objections to "the whitespace thing" in Python
and objections to "the parenthesis thing" in Lisp as more or less the
same. People who raise these objections are usually just saying "Ick!
This looks so unfamiliar to me!" in the language of rationalizations.
I guess a philosopher would say that I am an emotivist about notation
criticisms.
My main problem with "indentation controls scoping" is that I've
actually had production Python code die because of whitespace being
mangled in cutting&pasting between various things. It looks a bit odd,
but after having written BASIC, Pascal, APL, Forth, PostScript, Lisp,
C and Intercal, looking "odd" is cured just by looking harder. Killing a
production system due to whitespace-mangling isn't.

And, yes, I probably write more Python code than lisp code in an
average week.

//Ingvar
--
Ingvar Mattsson; ingvar at hexapodia.net;
You can get further with a kind word and a 2x4
than you can with just a kind word. Among others, Marcus Cole
Andrew Dalke
2003-10-06 08:09:24 UTC
Permalink
[cc'ed since I wasn't sure if you would be tracking the c.l.py thread]
A program is a stream of tokens, which may be separated by whitespace.
The sequence { (zero or more statements) } is a statement.
Some C tokens may be separated by whitespace and some *must* be
separated by whitespace.

static const int i
static const inti

i + + + 1
i ++ + 1

Written without spaces, as i+++1, the last case is ambiguous, so the
tokenizer has some logic to handle that -- specifically, a greedy
("maximal munch") match with no backtracking.
It throws away the ignorable whitespace and gives a stream of
tokens to the parser.
What's the equivalent for Python?
One definition is that "a program is a stream of tokens, some
of which may be separated by whitespace and some which
must be separated by whitespace." I.e., the same as my
reinterpretation of your C definition.

For a real answer, start with

http://python.org/doc/current/ref/line-structure.html
"A Python program is divided into a number of logical lines."

http://python.org/doc/current/ref/logical.html
"The end of a logical line is represented by the token NEWLINE.
Statements cannot cross logical line boundaries except where
NEWLINE is allowed by the syntax (e.g., between statements in
compound statements). A logical line is constructed from one or
more physical lines by following the explicit or implicit line joining
rules."

http://python.org/doc/current/ref/physical.html
"A physical line ends in whatever the current platform's convention
is for terminating lines. On Unix, this is the ASCII LF (linefeed)
character. On Windows, it is the ASCII sequence CR LF (return
followed by linefeed). On Macintosh, it is the ASCII CR (return)
character."

and so on.
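For instance, the explicit and implicit line-joining rules from those
pages look like this (a quick sketch):

total = 1 + \
        2        # explicit joining: the backslash continues the line
values = [1,
          2, 3]  # implicit joining: the open bracket keeps the line going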
Except that 'if', 'while' etc lines are terminated with delimiters
rather than newline. Oh, and doesn't Python have the option to use \
or somesuch to continue a regular line?
The C tokenizer turns the delimiter character into a token.

The Python tokenizer turns indentation level changes into
INDENT and DEDENT tokens. Thus, the Python parser just
gets a stream of tokens. I don't see a deep difference here.

Both tokenizers need to know enough about the respective
language to generate the appropriate tokens.
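You can watch the Python tokenizer do this from Python itself, via the
standard library's tokenize module (a sketch, in the Python 2 of the era):

import tokenize, token
from StringIO import StringIO

src = "if a:\n    b = 1\nc = 2\n"
# generate_tokens yields (type, string, start, end, line) tuples;
# note the INDENT and DEDENT tokens around the nested statement.
for tok in tokenize.generate_tokens(StringIO(src).readline):
    print token.tok_name[tok[0]], repr(tok[1])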
- If the indentation is buggered up, the brackets provide the
information you need to figure out what the indentation should have
been.
As I pointed out, one of the pitfalls which does occur in C
is the dangling else

if (a)
    if (b)
        c++;
else            /* indented incorrectly but valid */
    c--;

That mistake does not occur in Python. I personally had
C++ code with a mistake based on indentation. I and
three other people spent perhaps 10-15 hours spread
over a year to track it down. We all knew where the bug
was supposed to be in the code, but the indentation threw
us off.
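For contrast, here is how the two readings must be written in Python
(a sketch; a, b and c are stand-in names) -- the indentation itself
decides which 'if' the 'else' belongs to:

if a:
    if b:
        c += 1
    else:       # pairs with the inner 'if'
        c -= 1

if a:
    if b:
        c += 1
else:           # pairs with the outer 'if' -- a different program
    c -= 1

The misleading-indentation version of the C code simply cannot be written.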
- The whole tabs vs spaces issue doesn't arise.
That's an issue these days? It's well resolved -- don't
use tabs.

And you know, I can't recall a case where it's ever
been a serious problem for me. I have a couple of times
had a problem, but never one where the broken code still
ran, unlike the if/else code I listed above for C.

Andrew
dalke at dalkescientific.com
prunesquallor
2003-10-14 07:19:39 UTC
Permalink
Note that Lisp and Scheme have a quite unpleasant anonymous function
syntax, which creates a stronger pull toward macros than in e.g. Ruby or
Haskell.
Good grief!

Unpleasant are the inner classes needed to emulate anonymous functions
in Java.
Alex Martelli
2003-10-10 13:24:06 UTC
Permalink
Björn Lindberg wrote:
...
Agreed. I pointed out elsewhere that there has been no systematic
study to show that Lisp code is indeed "so much shorter than the
equivalent code in other languages" where "other languages" include
Python, Perl, or Ruby.
It would be interesting to see such studies made.
Absolutely! But funding such studies would seem hard. Unless some
company or group of volunteers had their own reasons to take some
existing large app coded in Lisp/Python/Perl/Ruby, and recode it in
one of the other languages with essentially unchanged functionality,
which doesn't seem all that likely. And if it happened, whatever
group felt disappointed in the results would easily find a zillion
methodological flaws to prove that the results they dislike should
be ignored, nay, reversed.

In practice, such a re-coding would likely involve significant
changes in functionality, making direct comparisons iffy, I fear.

I know (mostly by hearsay) of some C++/Java conversions done
within companies (C++ -> Java for portability, Java -> C++ for
performance) with strong constraints on functionality being "just
the same" between the two versions (and while that's far from
a "scientific result", a curious pattern seems to emerge: going
from C++ to Java seems to produce the same LOC's, apparently a
disappointment for some; going from Java to C++ seems to expand
LOC's by 10%/20%, ditto -- but how's one to say if the C++ code
had properly exploited the full macro-like power of templates,
for example...?). But I don't even have hearsay about any such
efforts between different higher-level languages (nothing beyond
e.g. a paltry few thousand lines of Perl being recoded to Python
and resulting in basically the same LOC's; or PHP->Python similarly,
if PHP can count as such a language, perhaps in a restricted context).
In any case, it implies you need to get to some seriously sized
programs (1000 LOC? 10000 LOC? A million?) before
the advantages of Lisp appear to be significant.
I think that goes for any advantages due to the abstraction capabilities
of macros, OO or HOFs. The small program in the study above seems to
capture the scripting languages' higher level relative to the
close-to-the-machine languages C & C++. (I have not read all of it
though.) To show the advantages of the abstraction facilities we have
been discussing in this thread, I believe much larger programs are needed.
Yes, and perhaps to show advantages of one such abstraction facility
(say macros) wrt another (say HOFs) would require yet another jump up
in application size, if it could be done at all. Unless some great
benefactors with a few megabucks to wast^H^H^H^H invest for the general
benefit of humanity really feel like spending them in funding such
studies, I strongly suspect they're never really going to happen:-(.


Alex
Kaz Kylheku
2003-10-13 20:51:22 UTC
Permalink
Well, no, not really. You can define new syntactic forms in terms of
old ones, and the evaluation rules end up being determined by those of
the old ones. Again, with HOFs you can always get the same
effect -- at the expense of an extra lambda here and there in your
source code.
A macro can control optimization: whether or not something is achieved
by that extra lambda, or by some open coding.

In the worst cases, the HOF solution would require the user to
completely obfuscate the code with explicitly-coded lambdas. The code
would be unmaintainable.
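To make the "explicitly-coded lambdas" point concrete, here is a sketch
in Python, which has no macros (my_while is an invented name, for
illustration only):

def my_while(cond, body):
    # cond and body must be zero-argument callables (thunks),
    # because arguments are evaluated eagerly at the call site.
    while cond():
        body()

state = {"i": 0}
my_while(lambda: state["i"] < 3,
         lambda: state.__setitem__("i", state["i"] + 1))

A macro could let the condition and body be written in-line and decide
whether to open-code the loop; the HOF version is stuck building and
calling opaque function objects.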

Secondly, it would be unoptimizable. The result of evaluating a
lambda expression is an opaque function object. It can only be called.
Consider the task of embedding one programming language into another
in a seamless way. I want to be able to write utterances in one
programming language in the middle of another. At the same time, I
want seamless integration between them right down to the lexical
level. For example, the embedded language should be able to refer to
an outer variable defined in the host language.

HOF's are okay if the embedded language is just some simple construct
that controls the evaluation of coarse-grained chunks of the host
language. It's not too much of an inconvenience to turn a few
coarse-grained chunks into lambdas.

But what if the parameters to the macro are not chunks of the
source language at all, but completely new syntax? What if that syntax
contains only microscopic utterances of the host language, such as
mentions of the names of variables bound in the surrounding host language?

You can't put a lambda around the big construct, because it's not even
written in the host language! So what do you do? You can use an escape
hatch to code all the individual little references as host-language
lambdas, and pepper these into the embedded language utterance. For
variables that are both read and written, you need a reader and writer
lambda. Now you have a tossed salad. And what's worse, you have poor
optimization. The compiler for the embedded language has to work with
these lambdas, which it cannot crack open. It can't just spit out code
that is integrated into the host-language compilation, where references
can be resolved directly.
This can only be accomplished with functions if you're
willing to write a set of functions that defer evaluation by, say,
parsing input, massaging it appropriately, and then passing it to the
compiler. At that point, however, you've just written your own macro
system, and invoked Greenspun's Tenth Rule.
This is false. Writing your own macro expander is not necessary for
getting the effect. The only thing that macros give you in this
regard is the ability to hide the lambda-suspensions.
That's like saying that a higher level language gives you the ability
to hide machine instructions. But there is no single unique
instruction sequence that corresponds to the higher level utterance.

Macros not only hide lambdas, but also hide the implementation choice
of whether lambdas are used at all, and how! It may be possible to
compile the program in different ways, with different choices.

Moreover, there might be so many lambda closures involved that writing
them by hand may destroy the clarity of expression and maintainability
of the code.
To some people
this is more of a disadvantage than an advantage because, when not
done in a very carefully controlled manner, it ends up obscuring the
logic of the code. (Yes, yes, yes, now someone will jump in and tell
me that it can make code less obscure by "canning" certain common
idioms. True, but only when not overdone.)
Functions can obscure in the same ways as macros. You have no choice.
Large programs are written by delegating details elsewhere so that a
concise expression can be obtained.

You can no more readily understand some terse code that consists
mostly of calls to unfamiliar functions than you can understand some
terse code written in an embedded language built on unfamiliar macros.

All languages ultimately depend on macros, even those functional
languages that don't have user-defined macros. They still have a whole
bunch of syntax. You can't define a higher order function if you don't
have a compiler which recognizes the higher-order-function-defining
syntax, and that syntax is nothing more than a macro that is built
into the compiler which captures the idioms of programming with higher
order functions!

All higher level languages are based on syntax which captures idioms,
and this is nothing more than macro processing.
Andrew Dalke
2003-10-07 05:41:52 UTC
Permalink
a = 2
a = 1 + 1
a = math.sqrt(4)
a = int((sys.maxint+1) ** (1/31))
...all mean the same thing.
Be careful with that. The first two return the integer 2, the third returns
the floating-point number 2.0 and the last returns 1 because 1/31 is
0 (unless you are using true division). Even if you use 1/31. you'll
get a different value on a 32-bit machine vs. a 64-bit machine.
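A sketch of the differences, using the Python 2 of the era (sys.maxint
and truncating integer division are Python 2 behavior):

import sys, math

print 1 + 1            # 2, an int
print math.sqrt(4)     # 2.0, a float
print 1/31             # 0: int/int truncates
print (sys.maxint + 1) ** (1/31)    # x ** 0 == 1
# With float division the result depends on the word size:
# sys.maxint + 1 is 2**31 on a 32-bit build (the power is about 2)
# but 2**63 on a 64-bit build (the power is about 4).
print (sys.maxint + 1) ** (1/31.)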

pedantically yours,

Andrew
dalke at dalkescientific.com
Andrew Dalke
2003-10-09 18:50:46 UTC
Permalink
Because the present is composed of the past. You have to be
compatible; otherwise you could not debug a Deep Space 1 probe
160 million km away (and this one was only two or three years old).
Huh? I'm talking purely about the interface. Use ASCII '[' and ']' in the
Lisp code and display it locally as something with more "directionality".
I'm not suggesting the Unicode character be used in the Lisp code.
Take advantage of advances in font display to overcome limitations
in ASCII.
Mathematicians indeed overload operators without taking
their precise properties into account. But mathematicians are naturally
intelligent. Computers and our programs are not. So it's easier if
you classify operators by properties; if you map the semantics to the
syntax, this allows you to apply transformations to your programs based
on the syntax without having to recover the meaning.
Ahhh, so make the language easier for computers to understand and
harder for intelligent users to use? ;)

Andrew
dalke at dalkescientific.com
Rob Warnock
2003-10-10 12:26:11 UTC
Permalink
Andrew Dalke <adalke at mindspring.com> wrote:
+---------------
| (and yes, I know about the lawsuit against disk drive manufacturors
| and their strange definition of "gigabyte"... )
+---------------

Oh, you mean the fact that they use the *STANDARD* international
scientific/engineering notation for powers of 10, instead of the
broken, never-quite-right-except-in-a-few-cases pseudo-binary
reinterpretation of powers of 10?!?!? [Hmmm... Guess you can tell which
side of *that* debate I'm on, eh?] The "when I write powers of 10 which
are 3*N just *assume* that I meant powers of 2 which are 10*N" hack simply
fails to work correctly when *some* of the "powers of 10" are *really*
powers of 10. It also fails to work correctly with things that aren't
intrinsically quantized in powers of 2 at all.

Examples: I've had to grab people by the scruff of the neck and push
their faces into the applicable reference texts before they believe me
when I say that gigabit Ethernet really, really *is* 1000000000.0 bits
per second [peak payload, not encoded rate], not 1073741824, and that
64 kb/s DS0 telephone circuits really *are* 64,000.0 bits/sec, not 65536.
[And, yes, 56 kb/s circuits are 56000 bits/sec, not 57344.]

Solution: *Always* use the internationally-recognized binary prefixes
<URL:http://physics.nist.gov/cuu/Units/binary.html> when that's really
what you mean, and leave the old scientific/engineering notation alone,
as pure powers of 10. [Note: The historical notes on that page are well
worth reading.]
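The size of the mismatch is easy to compute (a quick sketch):

# SI prefixes (powers of 10) vs binary prefixes (powers of 2):
for name, si, binary in [("kilo vs kibi", 10**3, 2**10),
                         ("mega vs mebi", 10**6, 2**20),
                         ("giga vs gibi", 10**9, 2**30)]:
    pct = (binary / float(si) - 1) * 100
    print "%s: %d vs %d (binary is %.1f%% larger)" % (name, si, binary, pct)
# kibi is 2.4% larger, mebi 4.9%, gibi 7.4% -- hence the "gigabyte" dispute.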


-Rob

p.s. If you're hot to file a lawsuit, go after the Infiniband Trade
Association for its repeated claims that 4x IB is "10 Gb/s". It isn't,
it's 8 Gb/s [peak user payload rate, not encoded rate]. Go read the
IBA spec if you don't believe me; it's right there.

-----
Rob Warnock <rpw3 at rpw3.org>
627 26th Avenue <URL:http://rpw3.org/>
San Mateo, CA 94403 (650)572-2607
