Discussion:
PEP 318
Jess Austin
2004-03-24 05:03:11 UTC
Permalink
def __foo_method(...):
    ...

bar = decorator1(__foo_method)
baz = decorator2(__foo_method, arg1, arg2)
I can't imagine the exact use for this, but I can imagine that there
could be a use. If the syntax remains as it is, that is. This PEP
seems to shoot itself in the foot in this respect.
The syntax in PEP 318 is syntactic sugar for the most common use of
function decorators; the current syntax would still be supported.
That's good to hear. Should this be made explicit in the PEP?
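For illustration, here is how the sugared and explicit forms relate, using the '@' syntax that PEP 318 eventually settled on rather than the bracket syntax debated in this thread; `trace` is a made-up decorator:

```python
def trace(func):
    # made-up decorator: wraps func so each call is announced
    def wrapper(*args, **kwargs):
        print('calling', func.__name__)
        return func(*args, **kwargs)
    return wrapper

# Sugared form: exactly equivalent to foo = trace(foo)
@trace
def foo():
    return 42

# Explicit form, still supported, and the only way to bind the result
# to a *different* name, as in the bar/baz example above
def _impl():
    return 42
bar = trace(_impl)
```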
Joe Mason
2004-03-24 17:01:58 UTC
Permalink
Using the proposed PEP 318 semantics, at the time decorator(<function>) is
called, it will be passed a function object but "__mul__" will not have been
seen yet. However, the function object's func_name field should have been
filled in. That should give it enough information to build a new function
Oh, good. That's what I proposed in another post - I guess I didn't
read the PEP closely enough to realize it was already like this. (It's
a bit weird to have a function with a func_name filled in, but actually
looking up that func_name will return something else or nothing! But
there would never be a reason to do that when you already have the
function object, so it's a useful asymmetry.)
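A quick check of that behaviour in modern terms (func_name is spelled `__name__` in Python 3); `record_name` is a hypothetical decorator, not anything from the PEP:

```python
seen = []

def record_name(func):
    # at decoration time '__mul__' is not yet bound in the class
    # namespace, but the function object already carries its name
    seen.append(func.__name__)
    return func

class Matrix:
    @record_name
    def __mul__(self, other):
        return 'product'

# the decorator saw the name before the class body finished executing
```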
type names. That presents another problem. Classes, unlike functions,
don't have name attributes, so mapping the names passed to multimethod() to
Perhaps they should. (But that's another PEP!)
It certainly seems doable, but the end result doesn't seem all that pretty.
(Why not just "class Matrix: _name = "Matrix" ..."? "attributes" looks
a little wordy to me. Just a question of style, but I want to know if
there's a technical reason I'm missing.)
...
return result
...
return result
...
return result
I think my preferred way to do this would be just

class MatrixBase:
    pass

class Matrix(MatrixBase):
    def __mul__(self, other) [multimethod(MatrixBase, MatrixBase)]:
        ...
        return result

That gets around the only case that won't work due to visibility,
without forcing all the other cases to use ugly strings.
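A rough sketch (written with the '@' syntax later adopted) of how a `multimethod(MatrixBase, MatrixBase)` decorator could dispatch on base classes rather than name strings; the registry and dispatch logic here are invented for illustration, not taken from the PEP:

```python
_registry = {}  # method name -> list of (type signature, implementation)

def multimethod(*sig):
    def register(func):
        impls = _registry.setdefault(func.__name__, [])
        impls.append((sig, func))
        def dispatch(*args):
            # first registered signature matching by isinstance wins,
            # so subclasses (e.g. Matrix where MatrixBase is declared) match
            for types, impl in _registry[func.__name__]:
                if len(args) == len(types) and all(
                        isinstance(a, t) for a, t in zip(args, types)):
                    return impl(*args)
            raise TypeError('no multimethod matches %r' % (args,))
        return dispatch
    return register

class MatrixBase:
    pass

class Matrix(MatrixBase):
    @multimethod(MatrixBase, MatrixBase)
    def __mul__(self, other):
        return 'matrix product'
```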

Joe
Skip Montanaro
2004-03-24 21:44:25 UTC
Permalink
Joe> I think my preferred way to do this would be just

Joe> class MatrixBase:
Joe>     pass

Joe> class Matrix(MatrixBase):
Joe>     def __mul__(self, other) [multimethod(MatrixBase, MatrixBase)]:
Joe>         ...
Joe>         return result

Good point.

Skip
Andrew Bennetts
2004-03-24 06:00:57 UTC
Permalink
Okay, but can you explain the mechanism or point me to the original post? I
can't find it on Google (probably too recent). Multimethod(a,b)() won't
know that each call is for __mul__, will it (maybe it will peek at the
func_name attribute)? From what you posted, all I saw was multiple
definitions of __mul__. Only the last one will be present as a method in
the class's definition.
Skip
Please take what I posted just as an idea (it is not even my idea!), not as
an implementation proposal.
I don't have an implementation, but I am pretty much convinced that it is
possible and not that hard.
I am not suggesting that we put multimethods in Python 2.4.
I am just noticing that the notation could be used to denote
multimethods too, as Ville Vainio suggested (sorry for the misspelling,
Ville!).
Just to support your point that the decorator idea is a Pandora's box, and
we can extract anything from it ;)
But the decorator syntax doesn't help with this case at all.

You *could* hack up multimethods today, though, by abusing metaclasses:

class Foo:
    ...
    class __mul__(multimethod):
        def Matrix(self, other):
            ...
        def Vector(self, other):
            ...

The definition of multimethod would be something like this:

class _multimethod(type):
    def __new__(cls, name, bases, d):
        if d.get('__metaclass__') == _multimethod:
            return super(_multimethod, cls).__new__(cls, name, bases, d)
        else:
            def multi(self, *args):
                try:
                    meth = d['_'.join([arg.__class__.__name__ for arg in args])]
                except KeyError:
                    raise TypeError, 'No multimethod for type(s) %r' % map(type, args)
                else:
                    return meth(self, *args)
            return multi

class multimethod:
    __metaclass__ = _multimethod

This is only very lightly tested. Extending this to cope with subclasses of
types, etc, is left as an exercise for the reader.

FWIW, here is what I tested it with:

if __name__ == '__main__':
    class C:
        class __mul__(multimethod):
            def int(self, other):
                return 'Integer multiplication!'
            def C(self, other):
                return 'Selfness squared!'

        class test(multimethod):
            def int_str(self, i, s):
                return 'Testing %r, %r' % (i, s)

    c = C()
    print c * 1
    print c * c
    try:
        print c * "x"
    except TypeError, e:
        print e.args[0]

    print c.test(2, 'two')
    try:
        print c.test('x', 'y')
    except TypeError, e:
        print e.args[0]
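Since the snippet above is Python 2 (`__metaclass__`, `print` statements, `raise E, msg`), here is a rough Python 3 translation of the same metaclass trick; the no-bases check replaces the Py2 `__metaclass__` test, and it is just as lightly tested:

```python
class _MultimethodMeta(type):
    def __new__(mcls, name, bases, d):
        # the 'multimethod' base itself is created as a normal class...
        if not bases:
            return super().__new__(mcls, name, bases, d)
        # ...but every subclass is replaced by a dispatching function,
        # which ends up bound as an ordinary method on the outer class
        def multi(self, *args):
            key = '_'.join(type(arg).__name__ for arg in args)
            try:
                meth = d[key]
            except KeyError:
                raise TypeError('No multimethod for type(s) %r'
                                % [type(a) for a in args])
            return meth(self, *args)
        return multi

class multimethod(metaclass=_MultimethodMeta):
    pass

class C:
    class __mul__(multimethod):
        def int(self, other):
            return 'Integer multiplication!'
        def C(self, other):
            return 'Selfness squared!'
```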

-Andrew.
Anders J. Munch
2004-03-24 21:50:40 UTC
Permalink
Ick! Having passed through PEP 308, I just prefer to agree directly with
Guido's decision, without any voting at all ;)
Half of the 308 problem was because Guido wasn't willing to make a
decision.
The other half was the voting method used.
The third half of the problem was that people got wound up about
details in the voting system. Details that would have been relevant
in a winner-takes-all vote, but were less than interesting in a
BDFL-guidance-vote.

- Anders
John Roth
2004-03-25 13:35:34 UTC
Permalink
"Anders J. Munch" <andersjm at inbound.dk> wrote in message
Post by Anders J. Munch
Ick! Having passed through PEP 308, I just prefer to agree directly with
Guido's decision, without any voting at all ;)
Half of the 308 problem was because Guido wasn't willing to make a
decision.
The other half was the voting method used.
The third half of the problem was that people got wound up about
details in the voting system. Details that would have been relevant
in a winner-takes-all vote, but were less than interesting in a
BDFL-guidance-vote.
I don't remember that at all, although it may have taken
place on the developer's list (which I don't read). What
I remember is a simple statement of how the vote would
be conducted. Of course, I could simply have skipped that
debate as being uninteresting.

John Roth
Post by Anders J. Munch
- Anders
Greg Ewing (using news.cis.dfn.de)
2004-03-24 06:15:42 UTC
Permalink
The syntax can be extended, i.e. "def foo() as generator" looks to me
to be a lot more explicit than "def foo()" followed by having the
compiler search the function body for a yield statement in order
to decide if it's a generator.
While I happen to agree that generators ought to be
created using something other than a plain "def",
it couldn't be done this way. Generators need to be
compiled differently from the beginning -- you can't
turn an ordinary function into a generator by
wrapping it in anything.
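Greg's point is visible at runtime: whether a `def` produces a generator is fixed at compile time by the presence of `yield` in the body, and no wrapper can flip that flag afterwards:

```python
import inspect

def plain():
    return [1, 2]

def gen():
    yield 1
    yield 2

# the compiler marked gen() as a generator function; plain() is not,
# and wrapping plain in anything cannot change that afterwards
assert not inspect.isgeneratorfunction(plain)
assert inspect.isgeneratorfunction(gen)
assert list(gen()) == [1, 2]
```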
--
Greg Ewing, Computer Science Dept,
University of Canterbury,
Christchurch, New Zealand
http://www.cosc.canterbury.ac.nz/~greg
Eric
2004-03-25 17:06:06 UTC
Permalink
Joe Mason <joe at notcharles.ca> wrote in message
(The crux of the flamewar is that Condorcet tends to elect
"compromise candidates" who are few peoples' favourite but palatable to
most, while IRV is more likely to elect people that are the first choice
of a large block but hated by others. Which is preferable is a matter
of philosophy.)
<snip>
I cannot see how IRV and a Condorcet method would differ should a
certain candidate receive a large block of first place votes. For if a
certain Candidate is the first choice of a large block, it has a
distinct advantage in both IRV and Condorcet over all the other
candidates, but will not necessarily be the winner in either.
I think the contentious scenario was two large opposed blocks and a
small centrist block (C):
[redid the ballots to put them in a form accepted by online calculators]
49:A>C>B
48:B>C>A
3:C
[snip]
So C is the winner. But, say the IRV proponents, only 3% of the
population actually want C to win!
Yes, I understand this is the claim that some IRV proponents would make.
However, no such definitive statement can honestly be asserted based on
those ballots. With most (if not all) ranked ballot methods (including
Condorcet and IRV), if a voter truly does not want a Candidate to win,
the way they indicate that is by leaving that candidate unranked.

In this case, the A & B voters did not leave C unranked. Both groups
clearly stated that they preferred C to their primary opponent.

What we do not know, because neither IRV nor Condorcet collects the
strength of the preference (there are inherent problems with doing such
a thing which is beyond the scope of this message), is how strong the
preference is for the A & B voters for C rather than their primary
opponent.

For example, it could be that:
(The numbers in parenthesis indicate the strength of how much the
candidates are liked on a linear 0 - 100 scale)

49: A(100) > C(99) > B(0)
48: B(100) > C(99) > A(0)
3: C(100)

Should this be true, one would be hard pressed to develop a credible and
compelling argument that C should not be the winner.

However, it is also possible that:

49: A(100) > C(1) > B(0)
48: B(100) > C(1) > A(0)
3: C(100)

In which case, an argument can be made that C should not be the winner.

But, like I stated, neither IRV nor Condorcet collect such information,
so, what is the fairest way to deal with this?

Both IRV and Condorcet assume that your top choice has a strength
of 100. However, they differ greatly in the assumptions made about lower
preferences.

With IRV, if your top choice is eliminated, your second choice is
automatically promoted to a strength of 100, regardless of how you
actually feel about that candidate. For example, say a voter had voted
(I've included theoretical preference strengths):

A(100) > B(1) > C(0)

but that A was eliminated. IRV recasts their vote as:

B(100) > C(0)

A Condorcet method assumes that the strengths of the preferences should
be distributed evenly among the ranked candidates. I find this
assumption to be far more compelling in the general case because it is
simply not believable that all the A & B voters in any genuine situation
would have the exact same feelings towards the other candidates.
Condorcet makes assumptions about the average feeling towards the other
candidates.

It is also interesting to note that nearly every other ranked ballot
method will also select C as the winner. IRV seems to stand alone in the
assertion that the winner should be someone other than C.

Based on:
http://cec.wustl.edu/~rhl1/rbvote/calc.html
http://condorcet.ericgorr.net/
http://www.duniho.com/remote-mamcalc.php
Anyway, the real question in this forum is whether Condorcet is a good
method for voting on a PEP.
If you would like to see how Condorcet and IRV behave in genuine
ranked ballot elections, you can check out:

http://ericgorr.net/library/tiki-index.php?page=BallotArchives

If anyone is aware of other places where ranked ballots from real
elections can be collected, let me know. I am working on collecting the
ranked ballots for the uk.* USENET hierarchy votes.
U-CDK_CHARLES\Charles
2004-03-26 15:29:07 UTC
Permalink
Off the top of my head, the only inherent problem with it that I see is
that what happens if 'NONE' turns out to be the Condorcet Winner?
Then the electorate has expressed its sincere opinion that no government
at all is better than any of the candidates. Government should be
immediately dissolved and anarchy instituted.
Have you ever wondered why it is that anarchists have a symbol to unite
them? :)
Joe Mason
2004-03-25 20:14:36 UTC
Permalink
Post by Eric
Yes, I understand this is the claim that some IRV proponents would make.
However, no such definitive statement can honestly be asserted based on
those ballots. With most (if not all) ranked ballot methods (including
Condorcet and IRV), if a voter truly does not want a Candidate to win,
the way they indicate that is by leaving that candidate unranked.
In this case, the A & B voters did not leave C unranked. Both groups
clearly stated that they preferred C to their primary opponent.
What we do not know, because neither IRV nor Condorcet collects the
strength of the preference (there are inherent problems with doing such
a thing which is beyond the scope of this message), is how strong the
preference is for the A & B voters for C rather than their primary
opponent.
Well, not too much beyond: the inherent problem that I know of is that
voters have no incentive to put anything other than 100 or 0, to maximize
the strength of their vote. (After all, if you prefer A to C by 100 to 99,
but putting that you prefer it by 100 to 1 will get A elected, why not
do it? Lack of perfect information makes this kind of strategic voting
dangerous, but that doesn't mean people won't try it, and that
destabilizes elections.)
Post by Eric
(The numbers in parenthesis indicate the strength of how much the
candidates are liked on a linear 0 - 100 scale)
49 A(100) > C(99) > B(0)
48 B(100) > C(99) > A(0)
3: C(100)
Should this be true, one would be hard pressed to develop a credible and
compelling argument that C should not be the winner.
49 A(100) > C(1) > B(0)
48 B(100) > C(1) > A(0)
3: C(100)
In which case, an argument can be made that C should not be the winner.
But, like I stated, neither IRV nor Condorcet collect such information,
so, what is the fairest way to deal with this?
One interesting approach was brought up on the elections list just
before I stopped reading it. (Wait, don't I recognize your name from
there? You'd be more familiar with it than I, but I'll bring it up
anyway, on the assumption that there are other interested readers out
there.)

Add an implicit "none" candidate to separate rankings of actual
preferences from least of the evils. Your first case becomes "A > C >
none > B", and the second (probably) "A > none > C > B". So in the
latter case, the voter is saying that they prefer C to B, but only if
forced to choose between them due to overwhelming preference by the rest
of the electorate.

I'd be grateful if you can point me to a thorough analysis of this idea,
since it sounds reasonable to me but I don't really have the background
to evaluate it.
Post by Eric
With IRV, if your top choice is eliminated, your second choice is
automatically promoted to a strength of 100, regardless of how you
actually feel about that candidate. For example, say a voter had voted
Ah, thanks for pointing that out. I knew it sounded a little fishy.

(So much nicer to have nobody from the other side cluttering up the
debate...)

Joe
Joe Mason
2004-03-25 22:26:01 UTC
Permalink
Off the top of my head, the only inherent problem with it that I see is
that what happens if 'NONE' turns out to be the Condorcet Winner?
Then the electorate has expressed its sincere opinion that no government
at all is better than any of the candidates. Government should be
immediately dissolved and anarchy instituted.

I have friends who would be delighted.

It just occurred to me that this is the exact case we have for PEPs, but
with NONE as an explicit option.

Joe
Eric
2004-03-25 22:57:48 UTC
Permalink
Off the top of my head, the only inherent problem with it that I see is
that what happens if 'NONE' turns out to be the Condorcet Winner?
Then the electorate has expressed its sincere opinion that no government
at all is better than any of the candidates. Government should be
immediately dissolved and anarchy instituted.
I have friends who would be delighted.
It just occurred to me that this is the exact case we have for PEPs, but
with NONE as an explicit option.
Seems reasonable to me when taking no action is a reasonable option.
Eric
2004-03-25 21:18:21 UTC
Permalink
Post by Joe Mason
One interesting approach was brought up on the elections list just
before I stopped reading it. (Wait, don't I recognize your name from
there? You'd be more familiar with it than I, but I'll bring it up
anyway, on the assumption that there are other interested readers out
there.)
Add an implicit "none" candidate to separate rankings of actual
preferences from least of the evils. Your first case becomes "A > C >
none > B", and the second (probably) "A > none > C > B". So in the
latter case, the voter is saying that they prefer C to B, but only if
forced to choose between them due to overwhelming preference by the rest
of the electorate.
I'd be grateful if you can point me to a thorough analysis of this idea,
since it sounds reasonable to me but I don't really have the background
to evaluate it.
You can find the full archives at:

http://lists.electorama.com/pipermail/election-methods-electorama.com/
Post by Joe Mason
[\s]*none[\s]*>
to find various messages related to it.

It does seem to be an interesting idea and one that I was unfamiliar
with...I have not investigated it fully.

Off the top of my head, the only inherent problem with it that I see is
that what happens if 'NONE' turns out to be the Condorcet Winner?

An election is about finding a winner; providing for a result where this
would not occur seems problematic, unless one is using a Condorcet method
where a strict (but not necessarily unique) ordering of the candidates
can be found, in which case one can just select the #2 person.

One such method that I am aware of is MAM
(http://www.alumni.caltech.edu/~seppley/).
AdSR
2004-03-22 22:33:54 UTC
Permalink
Skip Montanaro <skip at pobox.com> wrote...
I will reiterate my comment from before: PEP 318 is about more than just
static and class methods. Here are a few examples from the python-dev
discussion.
You (and Stephen Horne) have made your point.

Passing that grep over 3rd party packages might be interesting too...

On another note, regarding the initial subject of this thread: Is there
going to be any voting poll about the syntax in foreseeable future? My
preferred style would be the "standard" one proposed in the PEP.

Cheers,

AdSR
Eric
2004-03-24 20:33:07 UTC
Permalink
(The crux of the flamewar is that Condorcet tends to elect
"compromise candidates" who are few peoples' favourite but palatable to
most, while IRV is more likely to elect people that are the first choice
of a large block but hated by others. Which is preferable is a matter
of philosophy.)
A Condorcet method will, if one exists, elect the candidate (called a
Condorcet Winner) that would win in a two-way contest with every other
candidate. When a Condorcet Winner does not exist, there are various
methods for finding the winner, but you may be surprised at how often
a Condorcet Winner appears in various ranked ballot datasets I've
collected.

IRV only guarantees that its winner will defeat at least one of the
other candidates in a two-way contest.

I cannot see how IRV and a Condorcet method would differ should a
certain candidate receive a large block of first place votes. For if a
certain Candidate is the first choice of a large block, it has a
distinct advantage in both IRV and Condorcet over all the other
candidates, but will not necessarily be the winner in either.
Joe Mason
2004-03-24 22:14:37 UTC
Permalink
(The crux of the flamewar is that Condorcet tends to elect
"compromise candidates" who are few peoples' favourite but palatable to
most, while IRV is more likely to elect people that are the first choice
of a large block but hated by others. Which is preferable is a matter
of philosophy.)
<snip>
I cannot see how IRV and a Condorcet method would differ should a
certain candidate receive a large block of first place votes. For if a
certain Candidate is the first choice of a large block, it has a
distinct advantage in both IRV and Condorcet over all the other
candidates, but will not necessarily be the winner in either.
I think the contentious scenario was two large opposed blocks and a
small centrist block (C):

49 A > C > B
48 B > C > A
3 C > B=A

In Condorcet, we get:

A>B 49
A>C 49
B>A 48
B>C 48
C>A 51
C>B 52

So C is the winner. But, say the IRV proponents, only 3% of the
population actually want C to win!

With IRV, we note that C has the fewest 1st place votes and drop all C
ballots. Now we have:

49 A > B
48 B > A
3 (no preference)

So A wins by an incredibly narrow margin.
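The two counts for this exact scenario can be checked mechanically (a throwaway sketch: ballots as ranked lists, with unranked candidates treated as tied last):

```python
from itertools import permutations

# 49 voters: A > C > B; 48 voters: B > C > A; 3 voters: C only
ballots = [(['A', 'C', 'B'], 49), (['B', 'C', 'A'], 48), (['C'], 3)]
candidates = ['A', 'B', 'C']

# Condorcet: for each ordered pair, count the ballots ranking x above y
pairwise = {}
for x, y in permutations(candidates, 2):
    pairwise[(x, y)] = sum(
        n for ranking, n in ballots
        if x in ranking and (y not in ranking
                             or ranking.index(x) < ranking.index(y)))

# C beats A 51-49 and beats B 52-48, so C is the Condorcet winner
condorcet = [c for c in candidates
             if all(pairwise[(c, o)] > pairwise[(o, c)]
                    for o in candidates if o != c)]

# IRV first round: C has only 3 first-place votes and is eliminated,
# after which A beats B 49-48 on the remaining ballots
first_place = {c: sum(n for r, n in ballots if r[0] == c)
               for c in candidates}
```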

Centrists say that Condorcet is correct, because obviously we should
have the winner that everybody at least mildly approves of. Partisans
say that if the A/B split is pretty traditional and shifts by only a few
percentage points, then endorsing Condorcet is essentially endorsing a
centrist as an acclaimed winner. Many (at least, the vocal ones) appear
to prefer the 50% chance that their party will get in for a cycle even
at the cost of a 50% chance of total failure.

But then, if they *really* didn't want C to win, they'd vote them last,
wouldn't they? I'm just reporting the flamewar here, I'm not saying I
agree - I'm a Condorcet fan myself, but then, I'd vote for the centrist
party if there was one. (Well, there is or was in my country, but
that's another story...)

To avoid tarring the IRV proponents as extremists, I should mention that
another argument in favour is that it's a smaller step from IRV to a
proportional method like STV. So for electoral reform in the USA, for
instance, it makes more sense to use IRV to elect the President and STV
to choose a proportional Congress than it does to have completely
separate methods for each vote.

Anyway, the real question in this forum is whether Condorcet is a good
method for voting on a PEP. The standard objections - especially the
difficulty to explain to a non-technical audience - don't seem to apply
here.

Joe
Skip Montanaro
2004-03-23 15:36:28 UTC
Permalink
Skip> It will probably be a BDFL pronouncement. After all, that's why
Skip> he's the

Ville> Of course it will - still, that didn't stop us from voting before
Ville> :-).

You have my approval to hold a vote. Based on the results of the ternary
operator vote, I think it best that any results you gather not be appended
to PEP 318. :-)

Skip
Joe Mason
2004-03-24 15:39:27 UTC
Permalink
There's a fair amount of analysis on the best method, actually. In this
case, "Guido's choice" is probably a good one. (Normally it's done by
counting the relative magnitude of each pairwise win and things like
that. I'm not clear on whether the existence of a cycle means everyone
in the cycle is really a tossup or if it's fuzzier.)
Your comments indicate you're probably much more in touch with
voting method theory than I am! My take on it is that it needs to be
done using available data - that is, the original votes.
I spent a while hanging out on the election-methods mailing list after
the last US election, and discovered it was basically a big
Condorcet-vs-IRV flame war. There's a good summary of lots of methods at
http://electionmethods.org/, although it's pretty biased towards
Condorcet.

(The crux of the flamewar is that Condorcet tends to elect
"compromise candidates" who are few peoples' favourite but palatable to
most, while IRV is more likely to elect people that are the first choice
of a large block but hated by others. Which is preferable is a matter
of philosophy.)
I don't know if the original data on PEP 308 is still available. It would
be interesting (although it might just be throwing fuel on a fire that
should be allowed to die down) to reanalyze it (and I don't know if I'm
volunteering to do it or not!)
It's at the end of http://www.python.org/peps/pep-0308.html. I haven't
looked at it in detail, but it looks wacky.

Joe
Joe Mason
2004-03-24 17:37:47 UTC
Permalink
Post by Joe Mason
I don't know if the original data on PEP 308 is still available. It would
be interesting (although it might just be throwing fuel on a fire that
should be allowed to die down) to reanalyze it (and I don't know if I'm
volunteering to do it or not!)
It's at the end of http://www.python.org/peps/pep-0308.html. I haven't
looked at it in detail, but it looks wacky.
Sorry, that's just a summary. It's not possible to reconstruct the
original ballots from that, I don't think. I've now looked at it in
detail - it *is* wacky.

Joe
Skip Montanaro
2004-03-24 16:05:58 UTC
Permalink
Joe> (The crux of the flamewar is that Condorcet tends to elect
Joe> "compromise candidates" who are few peoples' favourite but
Joe> palatable to most, while IRV is more likely to elect people that
Joe> are the first choice of a large block but hated by others. Which
Joe> is preferable is a matter of philosophy.)

Given the results of the last couple of presidential elections in the US I'd
say our current system tends to "elect people that are the first choice of a
large block but hated by others" as well. It has little to do with the
voting method though, and more to do with the candidates pandering to the
more extreme segments of their constituencies (*). Perhaps one or the other
voting method would tend to move things back toward the middle.

Skip

(*) My personal opinion is that most Republican party apparatchiks try to
brand as "liberal" anyone whose views are at all to the left of say, Rush
Limbaugh.
John Roth
2004-03-25 13:57:27 UTC
Permalink
"Michele Simionato" <michele.simionato at poste.it> wrote in message
Here is a useful reference (including a Python implementation of
the Condorcet method):
http://electionmethods.org/
Michele Simionato
Thanks to you and Eric for the references.

John Roth
John Roth
2004-03-23 15:29:48 UTC
Permalink
"Michele Simionato" <michele.simionato at poste.it> wrote in message
Ville Vainio <ville at spammers.com> wrote in message
Post by Skip Montanaro
Skip> It will probably be a BDFL pronouncement. After all, that's
Skip> why he's the
Of course it will - still, that didn't stop us from voting before :-).
Ick! Having passed through PEP 308, I just prefer to agree directly with
Guido's decision, without any voting at all ;)
Half of the 308 problem was because Guido wasn't willing to make a decision.

The other half was the voting method used. It irked me at the
time, but I didn't have the background to do a rational critique;
I just knew up front that the specified conditions would kill the
idea.

I'd support the "Majority Rules" algorithm. See the article in
Scientific American (March 2004 edition). If you run the
algorithm to generate the whole list (rather than just the top
entry) and regard cycles as ties, then it's got the very pleasant
attribute that if a candidate is placed above another, a majority of
the votes cast did, in fact, put those two candidates in that
order.

Cycles are, of course, a problem with that method; you just
have to do something else to break the tie. Almost anything will
do if you filter the ballots to just include the issues/candidates
involved in the tie.
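The cycle problem John mentions is easy to exhibit; with the three ballots below, every candidate wins one pairwise contest and loses another, so no "Majority Rules" ordering exists:

```python
# rock-paper-scissors ballots: each candidate is first on one ballot
ballots = [['A', 'B', 'C'], ['B', 'C', 'A'], ['C', 'A', 'B']]

def beats(x, y):
    # x beats y if a majority of ballots rank x above y
    return sum(b.index(x) < b.index(y) for b in ballots) > len(ballots) / 2

# A beats B, B beats C, and C beats A: a cycle, i.e. a tie to be broken
```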

John Roth
Michele Simionato
2004-03-25 12:55:06 UTC
Permalink
Here is a useful reference (including a Python implementation of
the Condorcet method):

http://electionmethods.org/

Michele Simionato
Joe Mason
2004-03-24 13:33:52 UTC
Permalink
Post by John Roth
I'd support the "Majority Rules" algorithm. See the article in
Scientific American (March, 2004 edition) If you run the
There's no free version of the article online, but the authors are given
as Partha Dasgupta and Eric Maskin, and googling on the latter led me to
http://www.sss.ias.edu/papers/papereleven.pdf.

I've only skimmed it, but I'm a little confused by his terms in that
paper. He first starts, it seems, by defining "plurality/majority rule"
as simply allowing everybody to cast a vote for one candidate, and "a
candidate wins if he or she garners, respectively, a plurality or
majority of all votes cast" - in other words, the obvious way. But
later on he says, "in true majority rule the winner is the candidate who
beats everyone else in a pairwise comparison", which is Condorcet's
Method.

So I'll assume the "Majority Rules" algorithm is yet another dumbed-down
name for Pairwise Voting aka Condorcet's Method, and say that I like it,
but there's not much point using it for a yes/no vote. If you're saying to
let people vote on several alternate syntaxes plus "reject outright" -
perfect.
Post by John Roth
Cycles are, of course, a problem with that method; you just
have to do something else to break the tie. Almost anything will
do if you filter the ballots to just include the issues/candidates
involved in the tie.
There's a fair amount of analysis on the best method, actually. In this
case, "Guido's choice" is probably a good one. (Normally it's done by
counting the relative magnitude of each pairwise win and things like
that. I'm not clear on whether the existence of a cycle means everyone
in the cycle is really a tossup or if it's fuzzier.)

Joe
Skip Montanaro
2004-03-22 22:47:26 UTC
Permalink
AdSR> On other note, regarding the initial subject of this thread: Is
AdSR> there going to be any voting poll about the syntax in foreseeable
AdSR> future? My preferred style would be the "standard" one proposed in
AdSR> the PEP.

It will probably be a BDFL pronouncement. After all, that's why he's the
BDFL. I have a modified version of PEP 318 in my mailbox I need to read,
edit and check in. I'll try to get to it later today.

Skip
Ville Vainio
2004-03-23 16:36:39 UTC
Permalink
Michele> def __mul__(self, other) as multimethod(Vector, Scalar):
Michele>     ...

Michele> etc.

Michele> Way cool, actually :)

Skip> Okay, but can you explain the mechanism or point me to the
Skip> original post? I can't find it on Google (probably too
Skip> recent). Multimethod(a,b)() won't

Too recent, or it's because my name is Vainio, not Vanio :-).

Skip> know that each call is for __mul__ will it (maybe it will
Skip> peek at the func_name attribute)? From what you posted, all
Skip> I saw was multiple definitions of __mul__. Only the last
Skip> one will be present as a method in the class's definition.

My post didn't offer anything beyond what Michele suggested. I didn't
present any clear mechanism to do the trick, but I figured it ought to
be possible via the func_name attribute, as you suggest. Every declaration
would inject a mapping to some global data structure, and the actual
function call would look it up somehow.

Perhaps the actual callable should be bound to the name by doing

f = make_multimethod("f")

after doing all the declarations. Sadly this limits the usability by
preventing declaration of new variations after the final binding.

Obviously the wrapper function could always return the multimethod
dispatcher for that function name to force the resulting binding to be
the dispatcher at all times.

OTOH, I might be misunderstanding something.
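Ville's two-step scheme could look something like this (entirely invented names; because the dispatcher looks the table up at call time, variations declared after the final binding still work, in the way the paragraph above suggests):

```python
_table = {}  # name -> {tuple of argument type names: implementation}

def declare(name, *type_names):
    # 'declaration' step: register one variation under the shared name
    def register(func):
        _table.setdefault(name, {})[type_names] = func
        return func
    return register

def make_multimethod(name):
    # 'binding' step: return a dispatcher closed over the shared table;
    # lookup happens per call, so later declarations are still picked up
    def dispatch(*args):
        key = tuple(type(a).__name__ for a in args)
        try:
            impl = _table[name][key]
        except KeyError:
            raise TypeError('no %s variation for %r' % (name, key))
        return impl(*args)
    return dispatch

@declare('f', 'int', 'int')
def _add(a, b):
    return a + b

f = make_multimethod('f')

@declare('f', 'str', 'str')   # declared after binding, still dispatched
def _cat(a, b):
    return a + '/' + b
```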
--
Ville Vainio http://tinyurl.com/2prnb
DH
2004-03-23 00:34:32 UTC
Permalink
"def foo() as staticmethod" certainly looks best to me aesthetically.
It does look better with simple examples. But think of other potential
uses for an "as" keyword, and it might have problems.
Visual Basic uses "as" in function declarations to declare types, not
for function decorators.
VB example:
Function foo (x as Integer, y as Float) as Integer

Possible future Python example that uses "as" differently:

def foo(x as int, y as float) as int:
    "this function returns an integer, and takes int & float params"

If we use the list syntax for decorators instead of "as", we might be
able to do something like:

def foo(x as int, y as float) [synchronized, classmethod] as int:
    ...

See this thread:
http://mail.python.org/pipermail/python-dev/2004-February/thread.html#42780
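For comparison, the type-declaration half of this hypothetical syntax is what eventually landed in Python 3 as function annotations (PEP 3107), spelled with ':' and '->' rather than 'as':

```python
def foo(x: int, y: float) -> int:
    """takes an int and a float, returns an int"""
    return x + int(y)

# annotations are stored on the function but not enforced at runtime
```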
Joe Mason
2004-03-23 08:39:19 UTC
Permalink
Not a big fan of that syntax - I have to keep the parameter names and
types in sync by counting.
is a little better, except now we're getting very verbose.
For decorators in general, I like
You get an explicit list syntax, but it's set off by a keyword so they don't
run together to the eye. Because the keyword keeps it unambiguous, you
could even allow a tuple instead of a list: "def foo() as (x, y, z)".
So I definitely favour a keyword, but perhaps "as" is too generic. What
about "has" or "with"?
Come to think of it, the verbose example with "has" becomes (assuming a
shorter decorator name):

def foo(x has sig(int), y has sig(float)) has returns(int):

Since you might not need the full list syntax for a single decorator.
That's not too bad looking.

(No, I'm not seriously proposing decorators on parameters at this point.)

Joe
Stephen Horne
2004-03-25 13:31:24 UTC
Permalink
On Wed, 24 Mar 2004 17:22:31 -0700, David MacQuigg <dmq at gain.com>
On Wed, 24 Mar 2004 09:48:00 +0000, Stephen Horne
Symbols (and worse, odd combinations of symbols where it's hard to
tell which symbols are part of the same syntax rule and which are not)
that you've not seen before tend not to suggest where in the manual to
look.
We need a single unique symbol, one that doesn't conflict with
existing syntax, and has no special meaning itself. By using this
symbol for extensions of *many* statements, we minimize the amount of
new words or syntax users have to learn.
So each statement will only ever be extended once, and for one
purpose?
If 'as' is too generic, then what about 'decorators'. That clearly
states what follows and can easily be looked up, making the 'RTFM'
attitude justifiable.
"Decorators" is clear as to its meaning, but a little too long,
especially if it causes statements with multiple decorators to run over
one line. It also limits the syntax to this one statement. You would
not want to extend the syntax of a print statement, for example, with
decorators=(separator = ' ', terminator = '\n').
If it's a secondary syntax, learned late and rarely used, then a long,
explicit keyword is just what is needed. You can look it up, of
course, but it is far better for readability if you don't need to - if
the explicit keyword is itself a sufficient reminder. After all, it's
not just newbies that we should worry about - what about people like
me, who use several languages and sometimes go months without using
Python at all?

As for the print statement, I wouldn't use a decorating approach at
all. And neither did you, for that matter - what you described could
not be implemented using decorator functions.


What I might use is something like...

pragma printoptions ( separator = ' ', terminator = '\n' ) :
print "whatever"
print "something else"

...as a general way of providing standard-behaviour-overriding flags
for built-ins.

Except that I don't see this as a real problem. Print works well as a
quick way to get results out of short scripts, where perfect
formatting of those results is unimportant. In more sophisticated
programs, other I/O mechanisms are better suited to the job.
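
As it happens, the separator/terminator overrides contemplated here are
exactly what Python 3's print() function later provided as keyword
arguments; a small demonstration, capturing stdout just to show the
result:

```python
import io
import contextlib

# Python 3's print() grew exactly these overrides as keyword arguments,
# making a pragma unnecessary: sep is the separator, end the terminator.
buf = io.StringIO()
with contextlib.redirect_stdout(buf):
    print("whatever", "something else", sep=' ', end='\n')

assert buf.getvalue() == "whatever something else\n"
```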
Not really. What happens when there are two different ways of
modifying the same syntax? When the first one was defined with no
awareness that another kind of modification might be needed later?
I guess I'll need an example here. Seems like the potential for
future incompatibilities is *greater* if the syntax itself has
"special meaning" ( e.g. "decorators" instead of [] ).
If a statement has a 'decorators' extension syntax, it can always add
a 'widget' or 'blodget' extension syntax later, being clear to both
the parser and the reader.

Contrast the case with a single extension-denoting symbol - '$'
perhaps. If it gets re-used for several extensions to the same
statement, you get the real possibility that the only way to tell the
different extensions apart is the ordering. Which, at the very least,
means a far greater probability of having to reread the documentation
to figure out what something means - ie bad readability.
Also, I think in programs where the syntax is useful, it will be used
a lot, so ease of typing and readability are issues.
So do I - that is why I suggested a preamble method, where the list of
decorators to apply to a whole group of functions is only written
once.
--
Steve Horne

steve at ninereeds dot fsnet dot co dot uk
David MacQuigg
2004-03-24 23:44:13 UTC
Permalink
[snip]
Re-use of the same symbol to mean "modified syntax" in any context,
will avoid the Perl problem (too many symbols with idiosyncratic
meanings). Beginners will learn very quickly that the special symbol
means "similar to the standard syntax, but RTFM if you don't yet know
this variation". Locating the right documentation will be easy,
because it will always be at the end of the discussion on the normal
form of the statement.
I'm not clear on what you mean by "modified syntax". If it's accepted
into the language, it's a standard syntax, so marking it with a symbol
makes no sense. If you mean to mark things for preprocessors, sure, a
standard symbol would be a great idea, but we're not talking about
preprocessors here.
By "modified syntax" I mean an extension of the current standard
syntax, e.g. the "decorators" we are now considering adding to the
current standard function definition syntax, or the extra "arguments"
we were discussing for the print statement.

[snip]
Does it make sense to have a general symbol for modifications of the
simple standard syntax?
If you mean marking some bits of the language as being second-class,
absolutely not.
"Second class" has negative connotations. A better description would
be a "modification" or "extension" of the basic syntax, "secondary"
perhaps in the order in which beginners would learn the language, or
in the frequency with which the new syntax is used in comparison to
the basic syntax.
It may make sense to have comment character meaning "ignore the
keyword after this - it's for an external, mechanical tool". Doxygen
and others already have conventions for this (not to mention the #!
syntax), but it would be helpful to have a common agreed-on format. I
think that would arise better as something the various tool vendors
evolve among themselves, though.
There is a similarity here in the use of comment characters to tell
Python - "Ignore this. It is for an external tool." - but the
difference here is that we are not talking about external tools with
their own command syntax. In a sense, we are "adding tools" to the
language by extending its syntax, so maybe the analogy to commented
commands has some value. Certainly, we wouldn't tolerate a variety of
comment characters depending on which external tool the command was
for.

I guess its time for some concrete examples. For the function
decorator syntax, I would use

def func(arg1, arg2, ...) @(dec1, dec2, ...):

This is very similar to the syntax proposed in PEP 318, the
differences being the use of a special symbol, and () instead of [].
This satisfies all of the design goals in PEP 318 (which are a very
good statement of some general design goals for any extension syntax).

The advantage over what is proposed in PEP 318 is that this syntax is
more universally applicable to other statements we may want to extend.
The [] alone are good for just this one statement, so we will have to
learn a different syntax every time we want to add something new to an
existing statement.
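
Whatever surface form wins -- @(), [], or a keyword -- the PEP 318
semantics reduce to ordinary nested calls on the plain function object.
A sketch, assuming left-to-right application (the ordering was itself a
point of debate):

```python
# Two toy decorators that tag the function they receive and return it.
def dec1(f):
    f.tags = getattr(f, 'tags', ()) + ('dec1',)
    return f

def dec2(f):
    f.tags = getattr(f, 'tags', ()) + ('dec2',)
    return f

def func(x):
    return x * 2

# "def func(x) @(dec1, dec2):" would desugar to something like:
func = dec2(dec1(func))

assert func.tags == ('dec1', 'dec2')
assert func(3) == 6
```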

Consider the print statement, recently discussed in this forum (Paul
Prescod 3/7/2004). We can't say

print [separator = ' ', terminator = '\n'] x, y, z

because that looks like we are trying to print a list. A special
symbol will make it clear -- this is a modification of the normal
print statement, not just another item to be printed.

The key design goal is to *minimize* the number of syntactic
variations we have to introduce as we extend the language. The
proposed @() syntax seems like it would work in *many* places, and it
always has the same general meaning -- "This is a modification of the
basic syntax you already understand." To learn the details of a
particular variation, you read the manual on the particular statement
where the variation occurs. @(staticmethod) - read about it under
"def". print @(...) - read the manual on 'print'.

Here is another example. Now that we have function decorators, a very
sensible thing to do is to use a "generator" decorator to make a
function into a generator. We should then deprecate the 'yield'
keyword as it is no longer necessary, and may actually cause problems
when you change a function into a generator, but forget to change
'return' to 'yield'.

That leaves one little problem. What do we do for the unusual case
where you want both a "generator return" and a return that terminates
the function, just as if it were not a generator. This problem can be
solved almost without any documentation once our "modified syntax" is
well understood. '@return' means 'this is a different kind of return'
-- in the case of generators, a return that terminates the generator.

There are lots of examples like this, but I don't want to get into a
long discussion on the merits of each example. We have to assume that
changes and extensions will come. I would like to discuss the idea of
using a common syntax for all these extensions. I would also like to
avoid a debate about the @ sign ( I haven't been burned by Perl. :>)
We can choose another symbol. The requirements are no conflict with
existing syntax, and neutral meaning.

The proposed syntax for PEP 318 is good, and I am glad it is turning
away from special words. The proposed syntax satisfies the design
goals for extending the syntax of one statement. I would just like to
have something more general that works for other statements as well.

-- Dave
Joe Mason
2004-03-25 00:24:34 UTC
Permalink
Post by David MacQuigg
I'm not clear on what you mean by "modified syntax". If it's accepted
into the language, it's a standard syntax, so marking it with a symbol
makes no sense. If you mean to mark things for preprocessors, sure, a
standard symbol would be a great idea, but we're not talking about
preprocessors here.
By "modified syntax" I mean an extension of the current standard
syntax, e.g. the "decorators" we are now considering adding to the
current standard function definition syntax, or the extra "arguments"
we were discussing for the print statement.
Why is the current standard privileged? A few years down the road, half
of the basic functionality of the language will have special symbols in
front of it, just because it was proposed after an arbitrary date. The
symbol wouldn't tell anything about the features it's attached to except
the relatively uninteresting fact of when they were added to the
language.

The only use I can see for such information is to mark features which
require a recent interpreter, and a single otherwise meaningless symbol
is a particularly ugly way to do it.
Post by David MacQuigg
If you mean marking some bits of the language as being second-class,
absolutely not.
"Second class" has negative connotations. A better description would
Yes, I used it deliberately. Singling out syntax as non-basic makes it
second class. What if you predict wrong? What if it catches on like
wildfire and gets taught in Day 2 of every class? Now your symbol has
no meaning at all, because it's part of what everybody considers basic.
Post by David MacQuigg
There is a similarity here in the use of comment characters to tell
Python - "Ignore this. It is for an external tool." - but the
difference here is that we are not talking about external tools with
their own command syntax. In a sense, we are "adding tools" to the
If we're not, then I think it's an idea with almost no value at all.
Post by David MacQuigg
language by extending its syntax, so maybe the analogy to commented
commands has some value. Certainly, we wouldn't tolerate a variety of
comment characters depending on which external tool the command was
for.
No, but we might tolerate a comment character followed by a tag whose
first item was the tool identifier. But like you said, different
debate.
Post by David MacQuigg
Consider the print statement, recently discussed in this forum (Paul
Prescod 3/7/2004). We can't say
Yep, I recall you bringing this up at the time and not convincing
anybody.
Post by David MacQuigg
The key design goal is to *minimize* the number of syntactic
variations we have to introduce as we extend the language. The
always has the same general meaning -- "This is a modification of the
basic syntax you already understand." To learn the details of a
That's a meaning that's too general to be of any use. If the new syntax
is successful, it's no longer a modification, it's a basic syntax now,
Post by David MacQuigg
particular variation, you read the manual on the particular statement
People will recognize things they don't understand without an @ sign, by
virtue of not knowing what they are. @(staticmethod) won't tell me to
look under def any more than [staticmethod] will, and if I see something
I don't recognize near a print statement, I'll already know to read the
manual on 'print'.

BTW, I favour making the decorators a single name, list or tuple, so you
can say any of the following:

def foo() staticmethod:
pass
def foo() [staticmethod]:
pass
def foo() (staticmethod,):
pass
def foo() staticmethod,:
pass

Feels cleaner in the sense that no new constructs are being added, you
can just put an existing one in a place it wasn't allowed before.
That's true even if the actual tuple, as an optimization, isn't created.
I dunno if this'd make it much harder to parse, though. (I also like a
keyword, but I'm not hugely attached either way.)

Joe
Stephen Horne
2004-03-23 09:55:33 UTC
Permalink
Post by Joe Mason
Come to think of it, the verbose example with "has" becomes (assuming a
<snip>
Post by Joe Mason
(No, I'm not seriously proposing decorators on parameters at this point.)
Damn - I thought it was a cool idea!

After all, in implementation terms it means little more than a few
extra statements at the top of the function - apart from being more
clearly associated with the call parameters in the definition, there
is nothing much new.

I'd just like to assert that more than one decorator may be needed,
perhaps in cases such as...

def foo ( x [accepts(int), inrange(0,100)] ) [returns(int)] :

And assuming that these are mainly self-documentation and aids to
debugging and testing, it could be quite useful to get rid of most of
the overhead by perhaps having an option to ignore specified
decorators at compile time.
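
Even without parameter-decorator syntax, the checks above can be written
as an ordinary wrapper today; inrange() here is a made-up name matching
the example:

```python
# Hypothetical inrange() argument check, spelled as a plain wrapper
# since parameter decorators do not exist.
def inrange(lo, hi):
    def decorator(f):
        def wrapper(x):
            if not lo <= x <= hi:
                raise ValueError("argument out of range")
            return f(x)
        return wrapper
    return decorator

def foo(x):
    return x * 2
foo = inrange(0, 100)(foo)

assert foo(50) == 100
```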
--
Steve Horne

steve at ninereeds dot fsnet dot co dot uk
Stephen Horne
2004-03-24 09:48:00 UTC
Permalink
On Tue, 23 Mar 2004 18:40:07 -0700, David MacQuigg <dmq at gain.com>
Beginners will learn very quickly that the special symbol
means "similar to the standard syntax, but RTFM if you don't yet know
this variation".
RTFM doesn't really work when you don't have a keyword to look up.
Symbols (and worse, odd combinations of symbols where it's hard to
tell which symbols are part of the same syntax rule and which are not)
that you've not seen before tend not to suggest where in the manual to
look.

If 'as' is too generic, then what about 'decorators'. That clearly
states what follows and can easily be looked up, making the 'RTFM'
attitude justifiable.

OTOH, there is the 'def' keyword for people to look up already in this
case.
Does it make sense to have a general symbol for modifications of the
simple standard syntax?
Not really. What happens when there are two different ways of
modifying the same syntax? When the first one was defined with no
awareness that another kind of modification might be needed later?

My opinion is that it is better to try to avoid symbols and
over-generic keywords, and to try to be more explicit about what the
modification actually is (in this case, decorators). Particularly
where the syntax may be infrequently used.

Of course short is sweet too. Easy answers are very rare.
--
Steve Horne

steve at ninereeds dot fsnet dot co dot uk
David MacQuigg
2004-03-24 01:40:07 UTC
Permalink
Post by Joe Mason
Not a big fan of that syntax - I have to keep the parameter names and
types in sync by counting.
is a little better, except now we're getting very verbose.
For decorators in general, I like
You get an explicit list syntax, but it's set off by a keyword so they don't
run together to the eye. Because the keyword keeps it unambiguous, you
could even allow a tuple instead of a list: "def foo() as (x, y, z)".
So I definitely favour a keyword, but perhaps "as" is too generic. What
about "has" or "with"?
A less generic keyword will lead to odd phrases like "has int" or
"with staticmethod", or different keywords for each phrase.

The problem with using different keywords for each phrase is - after
we get a collection of them, it becomes a problem for beginners to
remember which one to use in each situation. Even if you try to choose
a keyword which is self-explanatory, a lot of people don't get the
meaning you intended (e.g. 'yield' as in 'produce' rather than 'give
way', 'lambda' as in 'lambda calculus'). What we need is something
short, neutral in meaning, and usable in any situation where you need
to modify an existing syntax, i.e. a symbol.

Re-use of the same symbol to mean "modified syntax" in any context,
will avoid the Perl problem (too many symbols with idiosyncratic
meanings). Beginners will learn very quickly that the special symbol
means "similar to the standard syntax, but RTFM if you don't yet know
this variation". Locating the right documentation will be easy,
because it will always be at the end of the discussion on the normal
form of the statement.

I would give an example, but I worry it will degenerate this
discussion into a debate over whether it looks like something bad in
language X. Let's see if we can discuss this on a higher level first.
Does it make sense to have a general symbol for modifications of the
simple standard syntax?

-- Dave
Post by Joe Mason
Come to think of it, the verbose example with "has" becomes (assuming a
Since you might not need the full list syntax for a single decorator.
That's not too bad looking.
(No, I'm not seriously proposing decorators on parameters at this point.)
Joe
Christian
2004-03-26 18:52:05 UTC
Permalink
I would use the logic of 'preferring the return type at the end', and
perhaps cite Ada, Pascal, Modula 2, etc (Algol as well?) as examples
to follow. But then why do I have no problem with C, C++, Java, C# etc
where the type of the return value usually comes first?
I think what makes
def foo[return(string)](x=42:int, y=3.14:float):
look so strange is the fact that you have the function name, the
return type, and after that the arguments.

def [return (string)] foo(x=42:int, y=3.14:float):
looks more familiar, if you concentrate on the 'foo'. But this style
seems to push the function's name into the middle of the declaration.

The fact that there's no problem with reading C, C++ etc. comes from the
continuity of their declarations. You can read
void foo(int bar=42)
very fluently, and
def foo [return void] (bar=34:int)
not.

The problem is that we are accustomed to having the function's name
right before the argument list, since that is how it is called:
you don't write foo [x] = (32, 42) but x = foo(32, 42)

IMHO, if the function's name has to be right after the 'def', and 'def'
has to be the first command in a declaration, the return type has to
come last
DH
2004-03-26 19:55:04 UTC
Permalink
Post by Christian
IMHO, if the function's name has to be right after the 'def', and 'def'
has to be the first command in a declaration, the return type has to
come last
I agree.

I think it is neat that function decorators may be used for specifying
argument types and the return types, but I don't think it should be THE
way to always do it, because it is pretty ugly. I still prefer
eventually adding an "as" keyword or some other different syntax from
the decorator syntax, like:

def foo(x as int, y as float) as string:
...

See: http://mail.python.org/pipermail/python-dev/2004-February/042795.html

I know that is like Visual Basic, but it is readable and easier to
understand for novices. If you want to use x:int, y:float instead, you
might as well throw out the colon at the end of the def statement since
it defeats the purpose of it.
Stephen Horne
2004-03-27 16:28:25 UTC
Permalink
Post by DH
Post by Christian
IMHO, if the function's name has to be right after the 'def', and 'def'
has to be the first command in a declaration, the return type has to
come last
I agree.
I think it is neat that function decorators may be used for specifying
argument types and the return types, but I don't think it should be THE
way to always do it, because it is pretty ugly.
It isn't really specifying anything. It is just validating, which we
can do already using normal code. Using decorators to validate
arguments/return types brings them to the front, thus making them more
useful as an example of self-documenting code, but while it's an
interesting application for decorators I don't think it should be
overemphasised.

Ensuring that a parameter has a specific type is often excessive in
Python - if another type supports the right methods in the right way,
it often makes sense to call the function with arguments of that type.
--
Steve Horne

steve at ninereeds dot fsnet dot co dot uk
unknown
2004-03-23 00:43:18 UTC
Permalink
Post by DH
"this function returns an integer, and takes an int & float params"
I think I'd rather use colons for that, like Pascal does, e.g.

def foo:int (x: int, y: float)

hmm, the foo:int doesn't look too good.
Post by DH
If we use the list syntax for decorators instead of "as", we might be
That's sort of nice.
Stephen Horne
2004-03-24 00:34:03 UTC
Permalink
On Tue, 23 Mar 2004 15:56:39 +0100, Marco Bubke <marco at bubke.de>
looks strange to me. It's very unreadable. It's maybe the first time I really
disagree with an idea of Guido's. :-)
I kind of agree, but I wonder how much is out of familiarity.

I would use the logic of 'preferring the return type at the end', and
perhaps cite Ada, Pascal, Modula 2, etc (Algol as well?) as examples
to follow. But then why do I have no problem with C, C++, Java, C# etc
where the type of the return value usually comes first?
--
Steve Horne

steve at ninereeds dot fsnet dot co dot uk
Marco Bubke
2004-03-23 14:56:39 UTC
Permalink
On 22 Mar 2004 16:43:18 -0800, Paul Rubin
Post by unknown
Post by DH
"this function returns an integer, and takes an int & float params"
I think I'd rather use colons for that, like Pascal does, e.g.
def foo:int (x: int, y: float)
hmm, the foo:int doesn't look too good.
In Ada, the type of a function return value is specified using an
explicit keyword ('returns' IIRC). I don't see the need for a unique
keyword just for that, but how about...
I like :
def foo(x=42:int, y=3.14:float) [return(string)]:


def foo[return(string)](x=42:int, y=3.14:float):

looks strange to me. It's very unreadable. It's maybe the first time I really
disagree with an idea of Guido's. :-)
Joe Mason
2004-03-24 13:02:21 UTC
Permalink
Joe> Not a big fan of that syntax - I have to keep the parameter names
Joe> and types in sync by counting.
<snip>
r = arg1 + arg2
print "r:", r
print "rest:", rest
print "kwds:", kwds
return kwds['result']
return r
func3 = accepts(int, int)(func3)
func3 = returns(int)(func3)
print func3(3,5,'a','b','c',other="47")
print func3(3,5,'a','b','c',result=47)
Note that check_accepts() does the counting for you. If you wanted type
declarations in the language I agree it would be more natural for the
arguments and their types to be side-by-side, but then it wouldn't be
Python. ;-)
No, you still have to count when you're actually writing the accepts()
call. It's the same as the complaints against the print "%s%s" % (this,
that) syntax.

Type declarations isn't a great example, because nobody really wants
them in Python, but I'm sure we can come up with decorators that would
be worthwhile for parameters. Markers for IDL or SWIG could be straight
passthroughs that are only there for the parser, for instance. (Of
course, normally you'd just make them comments...)
Joe> For decorators in general, I like
Joe> You get an explicit list syntax, but it's set off by a keyword so
Joe> they don't run together to the eye. Because the keyword keeps it
Joe> unambiguous, you could even allow a tuple instead of a list: "def
Joe> foo() as (x, y, z)".
I don't understand how
is somehow less ambiguous than
It's not any less ambiguous, it's just easier to read.

def foo(blah, asfa, abble, gabble, spelunk)[blah, blagh, briggle]:
pass
def bar(blah, asfa, abble, gabble, spelunk, blah, blagh)[briggle]:
pass
def baz(blah, asfa, abble, gabble)[spelunk, blah, blagh, briggle]:
pass

Can you tell how many parameters each of those has without peering
closely?

def foo(blah, asfa, abble, gabble, spelunk) as [blah, blagh, briggle]:
pass
def bar(blah, asfa, abble, gabble, spelunk, blah, blagh) as [briggle]:
pass
def baz(blah, asfa, abble, gabble) as [spelunk, blah, blagh, briggle]:
pass

That's still horrible, horrible style, but it's easier to take in.
I'd prefer prepositional keywords actually be consistent with their English
usage. It's just line noise and is likely to prove a stumbling block
for programmers whose command of the English language is not stellar. What
we really would say in English is
Define function foo taking no arguments as modified by decor1, decor2
and decor3.
No single preposition is going to read correctly as a replacement for "as
modified by", nor do we need all those other bits of English ("function",
I think "using" and "with" would both read correctly that way, if not
completely. Also "has", but less so. "as", not at all.

Joe
Skip Montanaro
2004-03-23 03:08:27 UTC
Permalink
DH> Possible future Python example that uses "as" differently:

DH> def foo(x as int, y as float) as int:
DH> "this function returns an integer, and takes an int & float params"

With no extension beyond the current PEP 318 proposal, you might postulate
returns() and accepts() decorators:

def foo(x, y) [accepts(int, float), returns(int)]:
...

which extend foo() with code to enforce input and output types. Further,
function attributes could be added which could be used by tools like
pychecker for intermodule type checking.
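
A rough implementation of the postulated decorators might look like this
(illustrative only -- the names accepts() and returns() come from the
post, the bodies are guesses):

```python
def accepts(*types):
    """Check positional argument types at call time."""
    def decorator(f):
        def wrapper(*args):
            for arg, t in zip(args, types):
                if not isinstance(arg, t):
                    raise TypeError("expected %s" % t.__name__)
            return f(*args)
        return wrapper
    return decorator

def returns(rtype):
    """Check the return type at call time."""
    def decorator(f):
        def wrapper(*args):
            result = f(*args)
            if not isinstance(result, rtype):
                raise TypeError("expected %s" % rtype.__name__)
            return result
        return wrapper
    return decorator

# Equivalent to "def foo(x, y) [accepts(int, float), returns(int)]:"
def foo(x, y):
    return int(x * y)
foo = returns(int)(accepts(int, float)(foo))

assert foo(2, 3.5) == 7
```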

Skip
David M. Wilson
2004-03-22 02:17:54 UTC
Permalink
Nicolas Fleury <nid_oizo at yahoo.com_removethe_> wrote...
Personally, I prefer the "as" syntax to the others proposed (and by a
large margin). However, I feel that it is making the language more
complex and I'm far from sure it's worth the effort.
I'd +1 for the 'as' syntax too, it's more descriptive and feels
'pythonic' (much as the term disgusts me) in its verbosity.

Personally, I think for the meager benefit this new syntax brings, it
appears to be a rather large and incompatible waste of time. With the
exception of syntactic beauty, does this really add anything to
Python? It gives programmers two ways of doing something very basic
indeed. Both proposed syntaxes will inevitably break existing source
analysers, etc.

Even preferring the 'as' syntax, 'def foo(x) as bar' doesn't really
make that much sense to me. staticmethods are wrapper objects and much
better expressed as 'foo = staticmethod(foo)', where you at least know
some kind of layering or transformation is being applied to foo (if
there isn't, why is this person using the same variable name? etc).
With 'as', it suggests some kind of aliasing is taking place, or some
kind of different type of object creation, which isn't the case.

It's also very specific syntax; I'd have hoped big language changes
like this would be reserved for larger, more fundamental, and general
changes that everyone can find useful. Did I say I didn't think it's
worth it already? :)


David.
unknown
2004-03-22 10:35:11 UTC
Permalink
And when did syntactic beauty stop mattering?
Most importantly, the current style separates the unambiguous
information that the method is a static or class method from the "def"
which starts the method's definition, by far too much.
Oh, it's not that big a deal. Anyway, look at the situation with
generator functions. ;-)
Ronald Oussoren
2004-03-22 14:40:24 UTC
Permalink
Ville Vainio <ville at spammers.com> wrote...
The current foo=staticmethod(foo) makes the Python 'staticmethod' seem
like a hack. Many users of staticmethod won't even need to know that
wrapping takes place.
I find myself in diametric opposition here. :)
Users (read: developers) /should/ know how staticmethod is working
under its skin; that's (and hopefully no-one here disagrees) a bloody
good thing. The fact that defining a static method is a simple
assignment tells the developer a lot more about Python's internal
workings than extra syntax does. It's far more general, it's explicit,
and it's readable.
It's only readable in very small examples, if you have functions that
are longer than 10 lines it is no longer obvious that the assignment has
anything to do with the function above.

To play devil's advocate, 'class Foo: pass' is also syntactic sugar,
having explicit calls to type also tells a developer more about the
inner workings of Python but that is not necessarily a good idea.
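
The equivalence alluded to here, spelled out: a class statement is
(roughly) sugar for an explicit three-argument call to type():

```python
# A class statement...
class Foo:
    pass

# ...is roughly sugar for an explicit type(name, bases, dict) call.
Bar = type('Bar', (object,), {})

assert isinstance(Foo(), Foo)
assert Bar.__name__ == 'Bar'
assert isinstance(Bar(), Bar)
```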

Ronald
--
X|support bv http://www.xsupport.nl/
T: +31 610271479 F: +31 204416173
unknown
2004-03-22 07:03:24 UTC
Permalink
The current foo=staticmethod(foo) makes the Python 'staticmethod' seem
like a hack. Many users of staticmethod won't even need to know that
wrapping takes place. It certainly discourages people from using the
feature in the first place.
And when did syntactic beauty stop mattering?
"def foo() as staticmethod" certainly looks best to me aesthetically.
The syntax can be extended, i.e. "def foo() as generator" looks to me
to be a lot more explicit than "def foo()" followed by having the
compiler search the function body for a yield statement in order
to decide if it's a generator.
Greg Ewing (using news.cis.dfn.de)
2004-03-24 05:57:13 UTC
Permalink
Well, yes. It wouldn't have occurred to me to care about the
compiler's pain ;-).
Careful, you'll have enforcement officers from the Society
for Prevention of Cruelty to Compilers knocking on your door.
(And the SPCC sub-contracts the PSU as their enforcement arm,
so watch out.)
--
Greg Ewing, Computer Science Dept,
University of Canterbury,
Christchurch, New Zealand
http://www.cosc.canterbury.ac.nz/~greg
Stephen Horne
2004-03-22 10:48:51 UTC
Permalink
On 21 Mar 2004 23:03:24 -0800, Paul Rubin
The syntax can be extended, i.e. "def foo() as generator" looks to me
to be a lot more explicit than "def foo()" followed by having the
compiler search the function body for a yield statement in order
to decide if it's a generator.
Good point. Though to me, it isn't that it's a pain for the compiler
to search for the 'yield' - I don't care about the compiler's pain. The
problem is that *I* have to look for the yield and might not notice
it.
--
Steve Horne

steve at ninereeds dot fsnet dot co dot uk
unknown
2004-03-22 11:02:49 UTC
Permalink
Post by Stephen Horne
Good point. Though to me, it isn't that it's a pain for the compiler
to search for the 'yield' - I don't care about the compiler's pain. The
problem is that *I* have to look for the yield and might not notice it.
Well, yes. It wouldn't have occurred to me to care about the
compiler's pain ;-).
Stephen Horne
2004-03-22 11:12:34 UTC
Permalink
On 22 Mar 2004 03:02:49 -0800, Paul Rubin
Post by Stephen Horne
Good point. Though to me, it isn't that it's a pain for the compiler
to search for the 'yield' - I don't care about the compiler's pain. The
problem is that *I* have to look for the yield and might not notice it.
Well, yes. It wouldn't have occurred to me to care about the
compiler's pain ;-).
We're such an unfeeling bunch, aren't we ;-)
--
Steve Horne

steve at ninereeds dot fsnet dot co dot uk
Skip Montanaro
2004-03-23 15:24:02 UTC
Permalink
Isaac> I disagree. It's not just the compiler which has to search for
Isaac> that yield keyword; we human beings reading other people's
Isaac> uncommented code (or mis-commented code) also have to do the same.
Isaac> It would do much good if the completely different call convention
Isaac> of generators were made much more explicit in the definition of
Isaac> the function.

You'll be free to write your generator functions like so:

def generator(f):
return f

def silly_counter(n) [generator]:
for i in range(10):
yield n+i

It will just be a convention you adopt though, not a language rule.
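
In the @ syntax Python eventually adopted for PEP 318, this marker
convention reads:

```python
def generator(f):
    return f  # a no-op marker; purely self-documentation

@generator
def silly_counter(n):
    for i in range(10):
        yield n + i

assert list(silly_counter(5))[:3] == [5, 6, 7]
```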

Skip
Isaac To
2004-03-23 03:27:57 UTC
Permalink
The syntax can be extended, i.e. "def foo() as generator" looks to me
to be a lot more explicit than "def foo()" followed by having the
compiler search the function body for a yield statement in order to
decide if it's a generator.
Stephen> Good point. Though to me, it isn't that it's a pain for the
Stephen> compiler to search for the 'yield' - I don't care about the
Stephen> compiler's pain. The problem is that *I* have to look for the
Stephen> yield and might not notice it.

I disagree. It's not just the compiler which has to search for that yield
keyword; we human beings reading others' uncommented code (or mis-commented
code) also have to do the same. It would do much good if the completely
different call convention of a generator were made much more explicit in the
definition of the function.

Regards,
Isaac.
Stephen Horne
2004-03-23 09:02:40 UTC
Permalink
Post by Isaac To
The syntax can be extended, i.e. "def foo() as generator" looks to me
to be a lot more explicit than "def foo()" followed by having the
compiler search the function body for a yield statement in order to
decide if it's a generator.
Stephen> Good point. Though to me, it isn't that it's a pain for the
Stephen> compiler to search for the 'yield' - I don't care about the
Stephen> compiler's pain. The problem is that *I* have to look for the
Stephen> yield and might not notice it.
I disagree. It's not just the compiler which has to search for that yield
keyword; we human beings reading others' uncommented code (or mis-commented
code) also have to do the same. It would do much good if the completely
different call convention of a generator were made much more explicit in the
definition of the function.
Why say "I disagree" if you actually agree with me? As I said...

"""
The problem is that *I* have to look for the yield and might not
notice it.
"""
--
Steve Horne

steve at ninereeds dot fsnet dot co dot uk
unknown
2004-03-23 23:15:33 UTC
Permalink
Post by Skip Montanaro
return f
yield n+i
It will just be a convention you adopt though, not a language rule.
It's a start. Next would be to add something like perl's -w flag,
which checks that the conventions are followed. Finally there can be
a compiler option that tells the optimizer to assume without checking
that the conventions are followed (i.e. if you say that some arg is an
int, then it's really an int). That should allow generating much
better code, which has the possibility of crashing if the convention is broken.
Hung Jung Lu
2004-03-22 21:54:32 UTC
Permalink
"def foo() as staticmethod" certainly looks best to me aesthetically.
The syntax can be extended, i.e. "def foo() as generator" looks to me
to be a lot more explicit than "def foo()" followed by having the
compiler search the function body for a yield statement in order
to decide if it's a generator.
True, but if a method were both generator and static, then we would
have:

def foo() as generator staticmethod:
    ...

Add another keyword for thread behavior:

def foo() as synchronized generator staticmethod:
    ....

And another keyword for privacy:

def foo() as private synchronized generator staticmethod:
    ....

And your language becomes pretty close to... Java! :)

Or, following C#, you can also specify some attributes:

[attr1=2,
 attr2=3,
 attr3="Hello"]
def foo():
    ....

-----------------------------------------

Of course there must be some reasonable compromise. I am coming from
the other end of the spectrum: meta-programming. In modern software
development, especially for large and complex systems,
meta-programming becomes more and more essential. If your language is
to grow with time, adding more and more keywords does not seem to be
the way to go. In some message-based languages like Io, even the if-statements
and the for-loops are not keywords: they are methods.

In the old days, when OOP just appeared, you used to see constructors
with long list of parameters. Nowadays it is much more common to see
things like SetAttribute() to change the behavior of an object. That
is, things are becoming more dynamic.

Metaprogramming is an unavoidable trend, in my opinion. In Java/C# you
use code generators. In C++ you use macros and templates. In Python
the staticmethod() can be interpreted also as a small metaprogramming
statement, not as a declaration of method type.

I am not against the "def foo() as staticmethod" syntax. I am just
bringing up a perspective on possible future problems. It is a little
bit like database table design: you could keep adding new columns to
your table for every new attribute, or you could normalize the table
by allowing a column to specify the attribute name/index, and a
different column for the value. In the first approach, you need to
modify the table definition and your programs when you want to add one
more feature. In the second approach, it becomes easy to add more and
more features, without redefining the table or modifying your program.

regards,

Hung Jung
Skip Montanaro
2004-03-22 22:33:26 UTC
Permalink
"def foo() as staticmethod" certainly looks best to me aesthetically.
The syntax can be extended, i.e. "def foo() as generator" looks to me
to be a lot more explicit than "def foo()" followed by having the
compiler search the function body for a yield statement in order to
decide if it's a generator.
Hung Jung> True, but if a method were both generator and static, then we
Hung Jung> would have:

Hung Jung> def foo() as generator staticmethod:
Hung Jung> ...

Hung Jung> Add another keyword for thread behavior:

Gotta stop thinking of decorators as keywords. That would be a complete
non-starter. It would be both inflexible (need to modify the parser every
time a new one was added) and constraining (require everyone to use the same
small set of "approved" decorators). Decorators are variables referencing
objects which are looked up at function/method/class definition time.

Hung Jung> def foo() as synchronized generator staticmethod:
Hung Jung> ....

Hung Jung> And another keyword for privacy:

Hung Jung> def foo() as private synchronized generator staticmethod:
Hung Jung> ....

Hung Jung> And your language becomes pretty close to... Java! :)

And gets fairly unreadable because of the lack of punctuation. I think
square brackets and commas improve readability a bit for those nearly
unreadable long sequences of decorators:

def foo() [private, synchronized, generator, staticmethod]:

Skip
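Skip's bracket list is just a sequence of ordinary callables applied to the
function. A sketch of the expansion using today's syntax; the exact
application order was still under discussion, so inner-to-outer (d2(d1(foo)))
is an assumption here, and d1/d2 are invented placeholder decorators:

```python
applied = []

def d1(f):
    # record that we ran, then pass the function through unchanged
    applied.append('d1')
    return f

def d2(f):
    applied.append('d2')
    return f

def foo():
    return 42

# hypothetical expansion of "def foo() [d1, d2]:"
foo = d2(d1(foo))

print(applied)   # shows which decorator ran first under this assumed order
```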
Jacek Generowicz
2004-03-22 10:11:26 UTC
Permalink
The current foo=staticmethod(foo) makes the Python 'staticmethod' seem
like a hack. Many users of staticmethod won't even need to know that
wrapping takes place. It certainly discourages people from using the
feature in the first place.
And when did syntactic beauty stop mattering?
Most importantly, the current style separates the unambiguous
information that the method is a static or class method from the "def"
which starts the method's definition, by far too much.
"def foo() as staticmethod" certainly looks best to me aesthetically.
My only slight reservation is the increased "keyword overloading". For
example ... "static" has something like 7 distinct meanings in C++ (at
last count), and I find that this has a detrimental effect on
discussions with colleagues. I hope that Python will be able to avoid
such problems.
Stephen Horne
2004-03-22 11:10:18 UTC
Permalink
On 22 Mar 2004 11:11:26 +0100, Jacek Generowicz
Post by Jacek Generowicz
My only slight reservation is the increased "keyword overloading". For
example ... "static" has something like 7 distinct meanings in C++ (at
last count), and I find that this has a detrimental effect on
discussions with colleagues. I hope that Python will be able to avoid
such problems.
Which is worse? Keyword overloading, or weird pattern-of-symbols
overloading?

Personally, I'd hate to see nouns or verbs 'overloaded'. But why
shouldn't prepositions be used in a wide variety of contexts in
Python? They are in spoken languages, after all.
--
Steve Horne

steve at ninereeds dot fsnet dot co dot uk
Ville Vainio
2004-03-22 06:58:06 UTC
Permalink
David> Personally, I think for the meager benefit this new syntax
David> brings, it appears to be a rather large and incompatible
David> waste of time. With the exception of syntactic beauty, does
David> this really add anything to

The current foo=staticmethod(foo) makes the Python 'staticmethod' seem
like a hack. Many users of staticmethod won't even need to know that
wrapping takes place. It certainly discourages people from using the
feature in the first place.

And when did syntactic beauty stop mattering?
--
Ville Vainio http://tinyurl.com/2prnb
David M. Wilson
2004-03-22 12:51:58 UTC
Permalink
Ville Vainio <ville at spammers.com> wrote...
The current foo=staticmethod(foo) makes the Python 'staticmethod' seem
like a hack. Many users of staticmethod won't even need to know that
wrapping takes place.
I find myself in diametric opposition here. :)

Users (read: developers) /should/ know how staticmethod is working
under its skin; that's (and hopefully no-one here disagrees) a bloody
good thing. The fact that defining a static method is a simple
assignment tells the developer a lot more about Python's internal
workings than extra syntax does. It's far more general, it's explicit,
and it's readable.

I can't see at all how it can be considered a hack.

If at some future date, staticmethod becomes intrinsically linked to the
Python core in some magical way, then I can see an argument for extra
syntax, but as it stands, staticmethod is a wrapper, defining a
staticmethod is equivalent to "foo = staticmethod(foo)", and "def foo
as staticmethod" is in my books rather ambiguous.
It certainly discourages people from using the
feature in the first place.
Again, I can't see how.
And when did syntactic beauty stop mattering?
Don't get me wrong, I love lovely looking code, but I don't like code
that wears makeup, which is what this is. We can already do this via
another, simpler, more descriptive means.


David.
Skip Montanaro
2004-03-24 15:07:24 UTC
Permalink
The syntax in PEP318 is syntactic sugar for the most common use of
function decorators, the current syntax would still be supported.
Jess> That's good to hear. Should this be made explicit in the PEP?

It's not really necessary, since the "current syntax" is just a normal
Python function call.

Skip
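The "current syntax" Skip refers to really is just an ordinary call plus a
rebinding; a runnable sketch (the class and method names are invented for
illustration):

```python
class C:
    def greet(name):
        return "hello, " + name
    # what "def greet(name) [staticmethod]:" would be sugar for:
    # an ordinary function call, then rebinding the name in the class body
    greet = staticmethod(greet)

print(C.greet("world"))
```

Since the right-hand side is an arbitrary expression, nothing here needs
special-casing in the PEP.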
Nicolas Fleury
2004-03-22 02:16:05 UTC
Permalink
I personally think that self not only should be a keyword,
it should not be among the parameters at all. However, I
seem to be in a distinct minority about that.
But how could that be changed/introduced?
However, I think you've got class methods mixed up with
static methods. Class methods require the first parameter
to be the class (sometimes spelled klas for some reason),
while static methods are the ones that don't have either an
instance or a class as the first parameter.
Ok, I wrote static method everywhere in my message and changed it to class
method after reading the PEP... so my first idea was right ;)

Regards,

Nicolas
Jacek Generowicz
2004-03-22 09:52:11 UTC
Permalink
I've given some Python courses, and the first reaction when showing
a class with some methods is something like "so I guess when the
first parameter is not named 'self' it makes a class method?".
Just as a point of information:

I teach Python courses ... hands-on interactive style ... class size =
12 ... I've given 8 of these so far ... I have _never_ had this
reaction. (80% of the students have previous C++ and/or Java
experience, and usually someone asks "How do I do class/static
methods?".)

Maybe the reason nobody believes this is because, fairly early on in
the treatment of classes, I point out that there is nothing special
about the name "self", but I usually ask the students what they make
of it, before offering any of my own comments.
If easing the creation of class methods is so important, I would
prefer a more radical approach with an end result that would be more
While I agree that there is a lot to be said for Python being guided
by the principle of least surprise, I would argue against letting
Python be shaped by the expectations of people whose expectations have
been moulded by Java and C++, as those expectations are typically
fundamentally flawed, and certainly inappropriate to Python. (For
example, most (Java and C++ people, and hence generally most) people
expect there to be access control, and hundreds of getters and setters
all over the place. We do _not_ want this in Python. Etc., etc.)
- Give a warning for all methods with first parameter not named "self"
in next versions of Python.
That would be acceptable ...
- In a future major version of Python, 3 or 4, self becomes a keyword
and a first parameter named otherwise implies a class method (I
understand it could mean a lot of changes in code not using self).
... that would (IMHO) not.
Jacek Generowicz
2004-03-22 15:56:43 UTC
Permalink
"Jacek Generowicz" <jacek.generowicz at cern.ch> wrote in message
Post by Jacek Generowicz
I teach Python courses ...
[...]
Post by Jacek Generowicz
fairly early on in the treatment of classes, I point out that
there is nothing special about the name "self", but I usually ask
the students what they make of it, before offering any of my own
comments.
Excellent class design! When I was doing classes, I found
one of the most important questions I could ask myself in
the after-the-class analysis was "what could I have said
earlier that would have dealt with this question / confusion?"
[...]
Post by Jacek Generowicz
- Give a warning for all methods with first parameter not named "self"
in next versions of Python.
That would be acceptable ...
Only if the intention is to move to making it a built-in. Otherwise there
really are people who use some other name than self (usually to save
keystrokes.)
... just after I say that there is nothing special about "self" (in my
Python courses), I plead that, even though you are allowed to use any
name you like, you should use "self", and nothing but "self", on the
grounds that this will ensure that other Python programmers (which
includes yourself at a later date) will immediately understand what
you are doing; while you are likely to cause lots of confusion if you
use anything other than "self". At this point I usually re-iterate
that clearly showing your intentions, is very much good Python
style. (If I'm feeling in "that sort of mood" I might even point out
that if you want to save keystrokes, then you know where you can find
Perl.)

If a warning such as Nicolas proposes were introduced, I hope that it
would be suppressable. If so, I suspect that it would be a very useful
feature. I think that making the name "self" obligatory would run very
much against some of the basic Python philosophies.
Skip Montanaro
2004-03-22 15:14:17 UTC
Permalink
- Give a warning for all methods with first parameter not named "self"
in next versions of Python.
Jacek> That would be acceptable ...

I suggest you see if PyChecker already does this.
- In a future major version of Python, 3 or 4, self becomes a keyword
and a first parameter named otherwise implies a class method (I
understand it could mean a lot of changes in code not using self).
Jacek> ... that would (IMHO) not.

(I'm not picking on Jacek here. I just chose to reply to his post.)

Everybody seems to be fixated on how best to spell "make this function a
class or static method". If that was all we were interested in, something
more syntactically restrictive would be fine. That's not all PEP 318 allows
though. It proposes syntax (and eventually semantics) for a general
function decorator capability.

I almost wish Python didn't already have staticmethod() and classmethod()
builtins so people wouldn't get so hung up on the most economical way to
spell them.

I suggest people google for

site:mail.python.org python-dev PEP 318

and look at some of the other ideas people have had or examples proposed of
how to (ab)use this feature. Its applicability is much broader than just
declaring methods to be "static" or "class".

Skip
Skip Montanaro
2004-03-23 15:46:31 UTC
Permalink
Michele> And don't forget Ville Vanio's idea of using the new syntax to
Michele> implement multimethods:

Michele> def __mul__(self,other) as multimethod(Matrix,Matrix):
Michele> ...

Michele> def __mul__(self,other) as multimethod(Matrix,Vector):
Michele> ...

Michele> def __mul__(self,other) as multimethod(Matrix,Scalar):
Michele> ...

Michele> def __mul__(self,other) as multimethod(Vector,Vector):
Michele> ...

Michele> def __mul__(self,other) as multimethod(Vector,Scalar):
Michele> ...

Michele> etc.

Michele> Way cool, actually :)

Okay, but can you explain the mechanism or point me to the original post? I
can't find it on Google (probably too recent). Multimethod(a,b)() won't
know that each call is for __mul__ will it (maybe it will peek at the
func_name attribute)? From what you posted, all I saw was multiple
definitions of __mul__. Only the last one will be present as a method in
the class's definition.

Skip
John Roth
2004-03-21 20:29:48 UTC
Permalink
"Nicolas Fleury" <nid_oizo at yahoo.com_removethe_> wrote in message
def foo(x, y) as staticmethod: pass
Define foo with arguments x and y as staticmethod.
Is Python now C++? Maybe I miss something about why this syntax is wrong.
Personally, I prefer the "as" syntax to the others proposed (and by a
large margin). However, I feel that it is making the language more
complex and I'm far from sure it's worth the effort. I've given some
Python courses, and the first reaction when showing a class with some
methods is something like "so I guess when the first parameter is not
named 'self' it makes a class method?". So I have to explain it's not
the case, etc.
If easing the creation of class methods is so important, I would prefer
a more radical approach with an end result that would be more intuitive
- Give a warning for all methods with first parameter not named "self"
in next versions of Python.
- In a future major version of Python, 3 or 4, self becomes a keyword
and a first parameter named otherwise implies a class method (I
understand it could mean a lot of changes in code not using self).
Regards,
Nicolas
I personally think that self not only should be a keyword,
it should not be among the parameters at all. However, I
seem to be in a distinct minority about that.

However, I think you've got class methods mixed up with
static methods. Class methods require the first parameter
to be the class (sometimes spelled klas for some reason),
while static methods are the ones that don't have either an
instance or a class as the first parameter.

John Roth
Jacek Generowicz
2004-03-22 10:04:30 UTC
Permalink
I personally think that self not only should be a keyword,
it should not be among the parameters at all. However, I
seem to be in a distinct minority about that.
The existence of C++ or Java coding guidelines which advocate the
universal use of this->member or the use of m_member for all member
data and function names, is (to me) evidence of the necessity of self.

Also, ask an average[*] C++ programmer whether the following functions
have the same type:

void A::foo(void);
void B::foo(void);

(where A and B are both classes).

In my experience[+], they will, typically, be adamant that the types
are identical. If they have been exposed to Python, then you have more
than a fair chance that they will understand that the types are, in
fact, different.

Python's explicit passing of self makes people understand what is
going on, much better ... and I think that is a very valuable thing.


[*] And we all know just how dangerous "average" C++ programmers are.

[+] You probably don't want to know why I have had ample opportunity
to ask this question, in real life.
Ville Vainio
2004-03-24 19:58:16 UTC
Permalink
Something just occured to me as far as multimethods go - would it be
beneficial for there to exist only one, global-all-the-way multimethod
dispatcher? This way declaring new multimethods in a module would
register themselves with the same global dispatcher that all the other
modules use. I.e.:

# module spam

def foo(x) [multimethod(str)]:
    pass

# module eggs

import spam

def foo(x) [multimethod(int)]:
    pass

foo("hello") # calls foo in 'spam'

This would improve opportunities for creating multimethod-driven
frameworks that could e.g. specify that 'validate_request' multimethod
will be called with all the requests received, so the user should implement

def validate_request(req) [multimethod(DisconnectRequest)]:
    return True

if he chooses to acknowledge the request.

(I know this is a contorted example and better solutions already
exist, so no need to point it out).

The global-all-the-way approach seems a bit out of place in usually so
modular Python. It might be more desirable to provide utility
functions to choose a 'dispatcher-space', like this:

select_dispatcher("ultrawidget_multimethods")

# all the multimethod declarations now add their signatures to this
# dispatcher, and all the mm invocations search for signatures in this space

foo = import_multimethod("ultrawidget_multimethods", "foo")

# now you can call foo(42, "blah") without actually declaring a new
# multimethod in the current module. Otherwise the only way to bind
# 'foo' to the dispatcher would be declaring at least one multimethod.


In the modular, more Pythonic way, adding new multimethods to the same
dispatcher would require doing something like:

def foo(x) [multimethod_with_dispatcher(moduleb, str)]:
    pass

where foo would be registered with the multimethod dispatcher of moduleb.

All this is probably too complicated or "different" to bundle in
stdlib (even in a hypothetical handy_demo_decorators module), but it
might be a fun third-party module for someone to hack. Even plain
one-module dispatching would be a useful demo.
--
Ville Vainio http://tinyurl.com/2prnb
Andrew Bennetts
2004-03-25 00:21:19 UTC
Permalink
Andrew> How is the registry not going to help distinguish between the
Andrew> foo methods from C, and the foo methods from C2?
By keying the dispatch table on (foo, Matrix) or (foo, Long). Note that
your classes (C and C2) make no sense given that the first parameter to the
multimethod() calls are Matrix and Long, respectively.
The first parameter thing is just a result of the confusion between myself
and Ville about whether we were discussing methods or functions... I thought
it was odd too, but I was trying to adapt the examples given earlier in the
thread as faithfully as I could; I didn't realise Ville was talking
about functions, not methods. Imagine my examples with the first argument
to multimethod removed, i.e.:

class C:
    def foo(self, other):
        ...
    foo = multimethod(Matrix)(foo)

    def foo(self, other):
        ...
    foo = multimethod(Vector)(foo)

class C2:
    def foo(self, other):
        ...
    foo = multimethod(String)(foo)

I still don't get how an implementation of multimethod using a global
dispatch table can support all of these calls correctly:

c, c2 = C(), C2()
c.foo(some_matrix) # This is fine
c.foo(some_vector) # This is fine too
c.foo(some_string) # This should fail!
c2.foo(some_matrix) # This should also fail!
c2.foo(some_string) # This is fine

Well, maybe you can do it by stack introspection in the multimethod call to
determine what class definition you're in, but that's vile.

-Andrew.
AdSR
2004-03-22 19:25:01 UTC
Permalink
Nicolas Fleury <nid_oizo at yahoo.com_removethe_> wrote...
def foo(x, y) as staticmethod: pass
Define foo with arguments x and y as staticmethod.
Is Python now C++? Maybe I miss something about why this syntax is wrong.
Personally, I prefer the "as" syntax to the others proposed (and by a
large margin). However, I feel that it is making the language more
complex and I'm far from sure it's worth the effort. I've given some
[snip]
If easing the creation of class methods is so important, I would prefer
Just done a quick check:

$ find /lib/python2.3/ -name "*.py" -exec egrep "(class|static)method\W" {} ';' | wc --lines
49

Not all of them are calls, and all this in 773 *.py files. I guess
this says something about importance...

Cheers,

AdSR
Skip Montanaro
2004-03-22 19:56:07 UTC
Permalink
If easing the creation of class methods is so important, I would prefer
AdSR> Just done a quick check:

AdSR> $ find /lib/python2.3/ -name "*.py" -exec egrep
AdSR> "(class|static)method\W" {} ';' | wc --lines
AdSR> 49

AdSR> Not all of them are calls, and all this in 773 *.py files. I guess
AdSR> this says something about importance...

Not necessarily. Certainly there's the fact that class and static methods
will be less frequently used than normal methods. However, there is also
another mitigating factor as I see it: neither classmethod() nor
staticmethod() has been around all that long (2.2 timeframe).

I will reiterate my comment from before: PEP 318 is about more than just
static and class methods. Here are a few examples from the python-dev
discussion.

1. From Fred Drake:

As an (admittedly trivial) example, I'd be quite happy for:

class Color [valuemap]:
    red = rgb(255, 0, 0)
    blue = rgb(0, 0, 255)
    green = rgb(0, 255, 0)

to cause the name Color to be bound to a non-callable object. Why must
the decorators be required to return callables? It will not make sense
in all circumstances when a decorator is being used.

2. From Anders Munch:

Given that the decorator expression can be an arbitrary Python
expression, it _will_ be used as such. For example:

def foo(arg1, arg2) as release(
        version="1.0",
        author="me",
        status="Well I wrote it so it works, m'kay?",
        warning="You might want to use the Foo class instead"):

3. From Shane Hathaway:

Ooh, what about this:

def singleton(klass):
    return klass()

class MyThing [singleton]:
    ...

That would be splendid IMHO.

There are plenty of other examples. Browse the archives.

think-outside-the-bun-(tm)-ly, y'rs,

Skip
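Shane Hathaway's singleton decorator already works with today's
call-and-rebind spelling; a runnable sketch (the class body is an invented
placeholder):

```python
def singleton(klass):
    # replace the class with its single instance at definition time
    return klass()

class MyThing:
    def greet(self):
        return "hello"

# what "class MyThing [singleton]:" would be sugar for
MyThing = singleton(MyThing)

print(MyThing.greet())
```

Note the decorator returns a non-callable instance, illustrating Fred
Drake's point that decorators need not return callables.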
Andrew Bennetts
2004-03-24 12:41:03 UTC
Permalink
Andrew> But the decorator syntax doesn't help with this case at
Andrew> all.
Andrew> You *could* hack up multimethods today, though, by abusing
Hmm? I would have figured that once you are able to associate
parameter types with a function name and a concrete callable that gets
called, you can implement multimethods.
The wrapper would essentially
1. Look up the function name N from func_name
2. Insert the mapping of parameter types to the callable in the
data structure that holds the mappings for multimethod N
3. Return the dispatcher object that remembers it's a dispatcher for
multimethod N, so, when called, it can search the data structure
for its own multimethod for the concrete callable to invoke with
the parameters.
Decorators don't add any new capabilities to python, just a more convenient
spelling for existing features.

So, your proposal would work equally well (functionally, anyway, just not
look as nice) like this:

class C:
    def foo(self, other):
        ...
    foo = multimethod(Matrix, Matrix)(foo)

    def foo(self, other):
        ...
    foo = multimethod(Matrix, Vector)(foo)

Which is to say, I don't understand how that can work -- the first
definition of foo gets clobbered.

You *could* have a global registry that's keyed off func_name, as you
suggest, but that doesn't work in general... what if I later have:

class C2:
    def foo(self, other):
        ...
    foo = multimethod(Long, String)(foo)

How is the registry not going to help distinguish between the foo methods
from C, and the foo methods from C2?
This would obviously be used for CLOS-style multimethods, i.e. the
concrete methods would be plain functions. This wouldn't work for
operator overloading, for example, unless all the dispatchable classes
define __mul__ to return mm_multiply(self, other)
I don't understand why __mul__ is any different to foo here.
Again, I might be overlooking something, but I haven't yet figured out
what :).
I might be missing something too, but I don't yet see what :)

-Andrew.
Skip Montanaro
2004-03-24 15:44:09 UTC
Permalink
Andrew> So, your proposal would work equally well (functionally, anyway,
Andrew> just not look as nice) like this:

Andrew> class C:
Andrew>     def foo(self, other):
Andrew>         ...
Andrew>     foo = multimethod(Matrix, Matrix)(foo)

Andrew>     def foo(self, other):
Andrew>         ...
Andrew>     foo = multimethod(Matrix, Vector)(foo)

Andrew> Which is to say, I don't understand how that can work -- the
Andrew> first definition of foo gets clobbered.

Andrew> You *could* have a global registry that's keyed off func_name,
Andrew> as you suggest, but that doesn't work in general... what if I
Andrew> later have:

Andrew> class C2:
Andrew>     def foo(self, other):
Andrew>         ...
Andrew>     foo = multimethod(Long, String)(foo)

Andrew> How is the registry not going to help distinguish between the
Andrew> foo methods from C, and the foo methods from C2?

By keying the dispatch table on (foo, Matrix) or (foo, Long). Note that
your classes (C and C2) make no sense given that the first parameter to the
multimethod() calls are Matrix and Long, respectively.

Skip
Skip Montanaro
2004-03-24 15:45:45 UTC
Permalink
Ville> The dispatcher obviously needs a reference to a module global
Ville> dict object of some sort, with some magical name like
Ville> '__mm_dispatcher_registry'.

More cleanly, multimethod is a callable instance.

Skip
Michele Simionato
2004-03-24 05:04:44 UTC
Permalink
Okay, but can you explain the mechanism or point me to the original post? I
can't find it on Google (probably too recent). Multimethod(a,b)() won't
know that each call is for __mul__ will it (maybe it will peek at the
func_name attribute)? From what you posted, all I saw was multiple
definitions of __mul__. Only the last one will be present as a method in
the class's definition.
Skip
Please take what I posted just as an idea (it is not even my idea!), not as
an implementation proposal.
I don't have an implementation, but I am pretty much convinced that it is
possible and not that hard.
I am not suggesting that we put multimethods in Python 2.4.
I am just noticing that the notation could be used to denote
multimethods too, as Ville Vainio suggested (sorry for the misspelling,
Ville!).

Just to support your point that the decorator idea is a Pandora's box, and
we can extract anything from it ;)

Michele Simionato
Michele Simionato
2004-03-25 08:29:18 UTC
Permalink
Still, I'd like to follow this thread just a little further, mostly just to
see where it goes. <snip>
As Ville Vainio pointed out in another thread, maybe the source of
the confusion is that we have in mind CLOS-style multimethods,
implemented as generic functions *outside* classes.
Here is an example, taken from the Goops manual,

http://www.gnu.org/software/guile/docs/goops/Methods.html#Methods

(define-method (+ (x <string>) (y <string>))
(string-append x y))

(+ 1 2) --> 3
(+ "abc" "de") --> "abcde"

In Guile "+" is a generic function and the define-method macro just
adds another method to the list of methods known by "+".
Yes, there is a little ambiguity since typically "def" means "define"
and not "add to the definition", but since practicality beats purity ...


Michele
Ville Vainio
2004-03-24 18:28:30 UTC
Permalink
Skip> always be the class). Hmmm... t1 presents a problem since
Skip> at the time multimethod(Matrix,Matrix) is called there is no
Skip> Matrix class available (it hasn't been bound yet). You'd
Skip> have to fudge that and use strings as the type names. That
Skip> presents another problem. Classes, unlike functions,

I'll reiterate here my view that putting the multimethods inside
classes is not really all that important. CLOS doesn't do that, for
one (And Lisp people certainly seem to be quite happy with that
approach :-). You can just do:

def multiply(self,other) [multimethod(Matrix, Vector)]:
    pass

after the classes have been created. The code within the function uses
self just like it would in a class.

With classes you get the advantage of having the names hidden in the
class namespace, obviously. But here I think the advantages of the
function-based approach outweigh the disadvantages.

We could also specify that the first type listed is always the class
into which the multimethod should be injected, and do the injecting in
the decorator. It would look alien in python code to find the function
in a namespace different from where it was deffed, though.

FWIW, I'm gradually getting really excited about these wrapper
thingies; they seem to create a new dimension of pythonism, despite
the fact that they really bring nothing new to the table. I sense the
coming of a new success story à la list comprehensions.
--
Ville Vainio http://tinyurl.com/2prnb
Andrew Bennetts
2004-03-24 13:51:19 UTC
Permalink
Post by Skip Montanaro
Andrew> Which is to say, I don't understand how that can work --
Andrew> the first definition of foo gets clobbered.
Yes, the name 'foo' gets clobbered every time, but the decorator
always returns the *dispatcher* that is capable of dispatching the
call to the correct definition of 'foo'. The dispatcher preserves the
references to the callables, so they don't get decreffed out of
existence.
The dispatcher obviously needs a reference to a module global dict
object of some sort, with some magical name like
'__mm_dispatcher_registry'.
[...]
Post by Skip Montanaro
Andrew> How is the registry not going to help distinguish between
Andrew> the foo methods from C, and the foo methods from C2?
It's not. I'm still thinking of multimethods mostly as plain functions
with dynamic dispatching on arg types, not actual methods. To get the
functionality in methods, you just need to call the mm function
explicitly with a unique name inside the method.
Ah, I see. I was thinking of methods, but you were thinking of functions.

But the same problem still applies: how would the multimethod decorator
distinguish between a "foo" function in moduleA, and a "foo" function in
moduleB? By inspecting the function's __module__ attribute? This degree of
introspection feels a bit too magical (i.e. implicit) to me...

-Andrew.
Ville Vainio
2004-03-24 09:45:05 UTC
Permalink
Andrew> But the decorator syntax doesn't help with this case at
Andrew> all.

Andrew> You *could* hack up multimethods today, though, by abusing
Andrew> metaclasses:

Hmm? I would have figured that once you are able to associate
parameter types with a function name and a concrete callable that gets
called, you can implement multimethods.

The wrapper would essentially

1. Look up the function name N from func_name

2. Insert the mapping of parameter types to the callable in the
data structure that holds the mappings for multimethod N

3. Return the dispatcher object that remembers it's a dispatcher for
multimethod N, so, when called, it can search the data structure
for its own multimethod for the concrete callable to invoke with
the parameters.

This would obviously be used for CLOS-style multimethods, i.e. the
concrete methods would be plain functions. This wouldn't work for
operator overloading, for example, unless all the dispatchable classes
did a:

class C:
def __mul__(self, other):
return mm_multiply(self,other)


Where mm_multiply would be the multimethod. Perhaps a base class could
be introduced that did this for all the operators...
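Such a base class might look like the following sketch. Everything here is hypothetical: `mm_multiply` stands in for the real multimethod and just dispatches on the types of both operands, and `OperatorDispatch` is an invented name for the suggested base class.

```python
# Hypothetical stand-in for the mm_multiply multimethod mentioned above:
def mm_multiply(a, b):
    impls = {
        ("Matrix", "Matrix"): lambda x, y: "matrix * matrix",
        ("Matrix", "Scalar"): lambda x, y: "matrix * scalar",
    }
    key = (type(a).__name__, type(b).__name__)
    return impls[key](a, b)

class OperatorDispatch:
    """Base class routing operator hooks to module-level multimethods."""
    def __mul__(self, other):
        return mm_multiply(self, other)

class Matrix(OperatorDispatch):
    pass

class Scalar(OperatorDispatch):
    pass
```

Then `Matrix() * Scalar()` reaches `mm_multiply` through the inherited `__mul__`, so the `*` operator participates in multimethod dispatch without each class defining its own hook.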

Again, I might be overlooking something, but I haven't yet figured out
what :).
--
Ville Vainio http://tinyurl.com/2prnb
Ville Vainio
2004-03-24 13:25:21 UTC
Permalink
Andrew> Decorators don't add any new capabilities to python, just
Andrew> a more convenient spelling for existing features.

I know, the decorator syntax just makes these look so much nicer.

Andrew> Which is to say, I don't understand how that can work --
Andrew> the first definition of foo gets clobbered.

Yes, the name 'foo' gets clobbered every time, but the decorator
always returns the *dispatcher* that is capable of dispatching the
call to the correct definition of 'foo'. The dispatcher preserves the
references to the callables, so they don't get decreffed out of
existence.

The dispatcher obviously needs a reference to a module global dict
object of some sort, with some magical name like
'__mm_dispatcher_registry'.

Andrew> You *could* have a global registry that's keyed off
Andrew> func_name, as you suggest, but that doesn't work in
Andrew> general... what if I later have:

Andrew> class C2:
Andrew> def foo(self, other):
Andrew> ...
Andrew> foo = multimethod(Long, String)(foo)

Andrew> How is the registry not going to help distinguish between
Andrew> the foo methods from C, and the foo methods from C2?

It's not. I'm still thinking of multimethods mostly as plain functions
with dynamic dispatching on arg types, not actual methods. To get the
functionality in methods, you just need to call the mm function
explicitly with a unique name inside the method.

I'm not sure whether methods could be dispatched more automatically,
someone might have some ideas?

Andrew> I don't understand why __mul__ is any different to foo here.

Well, it's just that __mul__ absolutely needs to be a method in order
to be called on seeing an * operator. 'foo' can be taken out of the
class.
--
Ville Vainio http://tinyurl.com/2prnb
Terry Reedy
2004-03-23 14:34:06 UTC
Permalink
"Michele Simionato" <michele.simionato at poste.it> wrote in message
[et cetera]

This PEP proposes somewhat implicit declarative syntax in the function
heading as a substitute (but not replacement) for an explicit procedural
call after the function definition. This is similar to the doc string rule,
and just as 'unnecessary'. However, if instead of writing

def f():
'f: a function to illustrate a point'
pass

we had to write

def f():
pass

f.__doc__ = 'f: a function to illustrate a point'

(as we now can), I suspect there would be fewer doc strings written. Even
if not, they would be less useful to readers. I see the point of the PEP
in putting information about a function at the top, so readers can more
easily see it before reading the body, if indeed they need to read the
implementation detail. So I suspect the result of this PEP would be more
functional transformations than at present. It certainly emphasizes the
point that Python functions are first-class objects which can be
manipulated and wrapped just like other objects.

I consider this a positive, but have no strong opinion on '(args) [deco]'
versus 'as deco'.

Terry J. Reedy
Michele Simionato
2004-03-23 07:52:11 UTC
Permalink
I will reiterate my comment from before: PEP 318 is about more than just
static and class methods. Here are a few examples from the python-dev
discussion.
red = rgb(255, 0, 0)
blue = rgb(0, 0, 255)
green = rgb(0, 255, 0)
to cause the name Color to be bound to a non-callable object. Why must
the decorators be required to return callables? It will not make sense
in all circumstances when a decorator is being used.
Ok.
Given that the decorator expression can be an arbitrary Python
def foo(arg1, arg2) as release(
version="1.0",
author="me",
status="Well I wrote it so it works, m'kay?",
Nice,
return klass()
...
That would be splendid IMHO.
This one I don't like (not the syntax, the implementation).
I want my singleton to be a class I can derive from.
This can be done by using a metaclass as decorator
(I posted an example months ago).
There are plenty of other examples. Browse the archives.
think-outside-the-bun-(tm)-ly, y'rs,
Skip
And don't forget Ville Vainio's idea of using the new syntax to
implement multimethods:

def __mul__(self,other) as multimethod(Matrix,Matrix):
...

def __mul__(self,other) as multimethod(Matrix,Vector):
...

def __mul__(self,other) as multimethod(Matrix,Scalar):
...

def __mul__(self,other) as multimethod(Vector,Vector):
...

def __mul__(self,other) as multimethod(Vector,Scalar):
...

etc.

Way cool, actually :)


Michele Simionato
Stephen Horne
2004-03-22 19:40:11 UTC
Permalink
Post by AdSR
$ find /lib/python2.3/ -name "*.py" -exec egrep
"(class|static)method\W" {} ';' | wc --lines
49
Not all of them are calls, and all this in 773 *.py files. I guess
this says something about importance...
Actually, given that support for staticmethod and classmethod is very
recent, it is rather surprising that it appears in that many places.

I wouldn't expect existing libraries to be updated just to make use of
'staticmethod' or whatever. New features mainly get used in new code,
and even then there is the drag factor as people take time to adopt
them.
--
Steve Horne

steve at ninereeds dot fsnet dot co dot uk
JanC
2004-03-24 01:34:28 UTC
Permalink
To me, the 'as' syntax is misleading. It implies that the syntax is
asserting a type, style or similar attribute of the function. That may
be the expected normal use, with 'staticmethod' etc, but my
understanding is that it is meant to be more general than that. You
can apply any translation or any sequence of translations that you
choose.
Maybe "def foo() with [list]" (or "def foo() using [list]"?) is more clear
then?
--
JanC

"Be strict when sending and tolerant when receiving."
RFC 1958 - Architectural Principles of the Internet - section 3.9
unknown
2004-03-22 12:40:48 UTC
Permalink
You are not supposed to be looking for it, IMHO.
Generators follow the Sequence protocol and are to be
treated as sequences. Thus, you should simply make it
clear in the
function-name/conventions/interface-documentation that
the function returns a sequence. Whether that
sequence is implemented via a generator or not is an
implementation detail.
But that's bogus. Python is dynamically typed which means that a
normal function can return whatever it wants, sequence or
non-sequence. A generator function can't return anything, it can only
yield.
Stephen Horne
2004-03-22 19:34:35 UTC
Permalink
On Mon, 22 Mar 2004 08:43:08 -0800, David Eppstein
In article <7xn069t5pr.fsf at ruckus.brouhaha.com>,
Post by unknown
You are not supposed to be looking for it, IMHO.
Generators follow the Sequence protocol and are to be
treated as sequences. Thus, you should simply make it
<snip>
Post by unknown
But that's bogus. Python is dynamically typed which means that a
normal function can return whatever it wants, sequence or
non-sequence. A generator function can't return anything, it can only
yield.
I agree with Eyal. A generator is a callable that, when called, returns
an iterator of the items it generates. The fact that it's defined using
yield syntax instead of using return syntax is irrelevant to the caller.
I agree, but I'd still prefer it if the difference between a generator
and a function was explicit in the definition.
--
Steve Horne

steve at ninereeds dot fsnet dot co dot uk
David Eppstein
2004-03-22 16:43:08 UTC
Permalink
In article <7xn069t5pr.fsf at ruckus.brouhaha.com>,
Post by unknown
You are not supposed to be looking for it, IMHO.
Generators follow the Sequence protocol and are to be
treated as sequences. Thus, you should simply make it
clear in the
function-name/conventions/interface-documentation that
the function returns a sequence. Whether that
sequence is implemented via a generator or not is an
implementation detail.
But that's bogus. Python is dynamically typed which means that a
normal function can return whatever it wants, sequence or
non-sequence. A generator function can't return anything, it can only
yield.
I agree with Eyal. A generator is a callable that, when called, returns
an iterator of the items it generates. The fact that it's defined using
yield syntax instead of using return syntax is irrelevant to the caller.
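A tiny sketch makes the point concrete (the function names here are invented for illustration): the caller cannot tell whether an iterable came from a generator or from an ordinary function.

```python
def countdown(n):
    """A generator function: calling it builds an iterator lazily."""
    while n > 0:
        yield n
        n -= 1

def countdown_list(n):
    """A plain function returning a ready-made sequence."""
    return list(range(n, 0, -1))

# From the caller's side the two are interchangeable iterables:
assert list(countdown(3)) == countdown_list(3) == [3, 2, 1]
```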
--
David Eppstein http://www.ics.uci.edu/~eppstein/
Univ. of California, Irvine, School of Information & Computer Science
Robert Brewer
2004-03-21 20:59:29 UTC
Permalink
Class methods require the first parameter
to be the class (sometimes spelled klas for some reason),
Because "class" IS a reserved word. I tend to use "cls" instead of
"klas", but I'm not Dutch. ;)


FuManChu
Eyal Lotem
2004-03-22 11:04:41 UTC
Permalink
Post by Stephen Horne
Good point. Though to me, it isn't that it's a pain
for the compiler
to search for the 'yield' - I don't care about the
compilers pain. The
problem is that *I* have to look for the yield and
might not notice
it.
You are not supposed to be looking for it, IMHO.
Generators follow the Sequence protocol and are to be
treated as sequences. Thus, you should simply make it
clear in the
function-name/conventions/interface-documentation that
the function returns a sequence. Whether that
sequence is implemented via a generator or not is an
implementation detail.


__________________________________
Do you Yahoo!?
Yahoo! Finance Tax Center - File online. File on time.
http://taxes.yahoo.com/filing.html
Peter Hansen
2004-03-22 12:33:08 UTC
Permalink
On Sun, 21 Mar 2004 17:53:11 +0100, Marco Bubke <marco at bubke.de>
are better but still I used to see [] as list
Surely it *is* a list - a list of modifier functions to apply to the
original function.
If it's a list, then just as surely a tuple should be allowed in its
place, as in every (?) other such case in Python that I can think of.
Would that be part of the proposal? (I haven't read the full PEP, sorry.)

(I know, "foolish consistency is the hobgoblin blah blah..." :-)

-Peter
Stephen Horne
2004-03-22 19:13:33 UTC
Permalink
On Mon, 22 Mar 2004 07:33:08 -0500, Peter Hansen <peter at engcorp.com>
Post by Peter Hansen
Surely it *is* a list - a list of modifier functions to apply to the
original function.
If it's a list, then just as surely a tuple should be allowed in its
place, as in every (?) other such case in Python that I can think of.
Would that be part of the proposal? (I haven't read the full PEP, sorry.)
I don't think tuples were mentioned in the PEP, though I only skimmed
through it quickly. But, well...
Post by Peter Hansen
(I know, "foolish consistency is the hobgoblin blah blah..." :-)
Yes, I have to agree.

The list-like syntax isn't too confusing in this context. A
tuple-based syntax would be. With parens, it would look a lot like
having two lists of arguments. Without, well, IMO it'd be even worse.

Which still shouldn't be read as me expressing a preference - the 'as'
syntax is roughly equally good IMO.
--
Steve Horne

steve at ninereeds dot fsnet dot co dot uk
Greg Ewing (using news.cis.dfn.de)
2004-03-24 06:03:48 UTC
Permalink
You are not supposed to be looking for it, IMHO.
Generators follow the Sequence protocol and are to be
treated as sequences. Thus, you should simply make it
clear in the
function-name/conventions/interface-documentation that
the function returns a sequence. Whether that
sequence is implemented via a generator or not is an
implementation detail.
That may be true when you're *using* the generator, but
when looking at the implementation, it can be helpful
if you're warned ahead of time that you're wading into
generator code, to get you into the right frame of
mind.

I had that experience just the other day -- started
reading a function, had some moments of wondering
"how does this work?", then thought "oh, I know, I
bet this is a generator"... skimmed through it, and
sure enough, there was a 'yield'.
--
Greg Ewing, Computer Science Dept,
University of Canterbury,
Christchurch, New Zealand
http://www.cosc.canterbury.ac.nz/~greg
David Eppstein
2004-03-24 18:19:44 UTC
Permalink
In article <c3r8c6$2b5g98$1 at ID-169208.news.uni-berlin.de>,
Post by Greg Ewing (using news.cis.dfn.de)
You are not supposed to be looking for it, IMHO.
Generators follow the Sequence protocol and are to be
treated as sequences. Thus, you should simply make it
clear in the
function-name/conventions/interface-documentation that
the function returns a sequence. Whether that
sequence is implemented via a generator or not is an
implementation detail.
That may be true when you're *using* the generator, but
when looking at the implementation, it can be helpful
if you're warned ahead of time that you're wading into
generator code, to get you into the right frame of
mind.
I had that experience just the other day -- started
reading a function, had some moments of wondering
"how does this work?", then thought "oh, I know, I
bet this is a generator"... skimmed through it, and
sure enough, there was a 'yield'.
I think this can be alleviated by following a convention of using
"Yields" as the first word of the docstring for all simple generators.
E.g., http://aspn.activestate.com/ASPN/Cookbook/Python/Recipe/117119

Of course, that doesn't help for code not following the convention...
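A hypothetical illustration of the convention (this helper is made up for the example): leading the docstring with "Yields" warns the reader that a generator follows.

```python
def pairwise(seq):
    """Yields overlapping (seq[i], seq[i+1]) pairs from seq."""
    for i in range(len(seq) - 1):
        yield (seq[i], seq[i + 1])
```

So `list(pairwise([1, 2, 3]))` gives `[(1, 2), (2, 3)]`, and the first word of the docstring flags the generator before any 'yield' is reached.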
--
David Eppstein http://www.ics.uci.edu/~eppstein/
Univ. of California, Irvine, School of Information & Computer Science
Stephen Horne
2004-03-22 10:23:02 UTC
Permalink
On Sun, 21 Mar 2004 17:53:11 +0100, Marco Bubke <marco at bubke.de>
are better but still I used to see [] as list
Surely it *is* a list - a list of modifier functions to apply to the
original function.

To me, the 'as' syntax is misleading. It implies that the syntax is
asserting a type, style or similar attribute of the function. That may
be the expected normal use, with 'staticmethod' etc, but my
understanding is that it is meant to be more general than that. You
can apply any translation or any sequence of translations that you
choose.

Not that I care either way, just being pointlessly pedantic really.
--
Steve Horne

steve at ninereeds dot fsnet dot co dot uk
Nicolas Fleury
2004-03-21 19:45:39 UTC
Permalink
def foo(x, y) as staticmethod: pass
Define foo with arguments x and y as staticmethod.
Is Python now C++? Maybe I'm missing something about why this syntax is wrong.
Personally, I prefer the "as" syntax to the others proposed (and by a
large margin). However, I feel that it is making the language more
complex and I'm far from sure it's worth the effort. I've given some
Python courses, and the first reaction when showing a class with some
methods is something like "so I guess when the first parameter is not
named 'self' it makes a class method?". So I have to explain it's not
the case, etc.

If easing the creation of class methods is so important, I would prefer
a more radical approach with a end result that would be more intuitive
to newcomers:
- Give a warning for all methods with first parameter not named "self"
in next versions of Python.
- In a future major version of Python, 3 or 4, self becomes a keyword
and a first parameter named otherwise implies a class method (I
understand it could mean a lot of changes in code not using self).

Regards,

Nicolas
John Roth
2004-03-21 18:16:04 UTC
Permalink
"Marco Bubke" <marco at bubke.de> wrote in message
Hi
I have read some mail on the dev mailing list about PEP 318 and find the
new
Syntax really ugly.
There doesn't seem to be a really beautiful
syntax for this. That's the reason it wasn't
in 2.2.

My personal opinion is that I'll take whatever
the core developers decide is best.

John Roth
Skip Montanaro
2004-03-21 17:16:32 UTC
Permalink
Marco> def foo(x, y)[staticmethod]: pass
... vs ...
Marco> def foo(x, y) as staticmethod: pass

Marco> Define foo with arguments x and y as staticmethod.

Marco> Is Python now C++? Maybe I'm missing something about why this syntax is wrong.

I believe the "as wrap1, wrap2, ..." form is one of the alternatives under
consideration, though current sentiment seems to be in favor of the
list-like syntax.

Python has a long history of borrowing good ideas from other languages. I'm
not aware that any of this is being borrowed from C++, no matter that it has
some syntactic similarities to stuff C++ does. Note in particular that the
construct is much more general than applying the currently available
staticmethod and classmethod builtins. The end result need not even be a
function or class (there has been some discussion about applying this
construct to classes, but it's not as obviously valuable). See recent
discussion in the python-dev archives for some other ideas.

Skip
Marco Bubke
2004-03-21 16:53:11 UTC
Permalink
Hi

I have read some mail on the dev mailing list about PEP 318 and find the new
syntax really ugly.

def foo[staticmethod](x, y): pass

I call this foo(1, 2), this isn't really intuitive to me!

Also I don't like the brackets.

def foo(x, y)[staticmethod]: pass

is better, but still, I'm used to seeing [] as a list or access
operator, and these aren't. The original syntax looks much cleaner to me.

def foo(x, y) as staticmethod: pass

Define foo with arguments x and y as staticmethod.

Is Python now C++? Maybe I'm missing something about why this syntax is wrong.

regards

Marco
Ronald Oussoren
2004-03-23 06:41:03 UTC
Permalink
I'll preface by saying that I haven't fully grokked all the possible
uses of decorators. Also, I can sympathize with the complaints about
seeing a function definition and then seeing a classmethod call on
that function 50 lines later (actually that seems like a case for
refactoring but never mind). b-)
The brackets and the 'as' both seem awkward to me. Also, introducing
a requirement that definition and decoration take place on the same
line seems to limit the possibilities of decoration. Perhaps my
def __foo_method(...)
....
bar = decorator1(__foo_method)
baz = decorator2(__foo_method, arg1, arg2)
I can't imagine the exact use for this, but I can imagine that there
could be a use. If the syntax remains as it is, that is. This PEP
seems to shoot itself in the foot in this respect.
The syntax in PEP318 is syntactic sugar for the most common use of
function decorators, the current syntax would still be supported.

Ronald