[plt-scheme] Re: HTDP - evidently not for everyone.

From: wooks (wookiz at hotmail.com)
Date: Thu Feb 11 01:49:57 EST 2010


On Feb 10, 6:22 pm, Carl Eastlund <carl.eastl... at gmail.com> wrote:
> On Wed, Feb 10, 2010 at 1:05 PM, wooks <woo... at hotmail.com> wrote:
> > On Feb 10, 1:46 pm, Shriram Krishnamurthi <s... at cs.brown.edu> wrote:
>
> >> And for that matter, study has shown that some who "can't do X" have a
> >> much subtler problem: for instance, dyslexia.  Should we have
> >> administered a "does this spelling look right?" test before school and
> >> kept them out of reading classes?  (And what if there's a similar
> >> phenomenon here?)
>
> > I don't think it is fair to extrapolate the argument to any X.
> > The argument is that programming ability is a naturally occurring
> > attribute that humans either possess or don't, hence the appropriate
> > comparison would be with some other naturally occurring human
> > attribute rather than ones, like spelling or reading, that are
> > skills generally acquired through teaching.
>
> It takes a lot more than one study to determine (a) that there is such
> a thing as "programming ability", (b) that it is a measurable
> quantity, (c) that we know how to reliably measure it, and (d) that it
> is a fixed quantity that cannot be improved by ANY means (not just the
> means tried so far).
>

I'd just like to set your "objections" (can I call them that?)
against what I think the paper actually claims.

From the Abstract:
"We have found a test for programming aptitude, of which we give
details. We can predict success or failure even before students have
had any contact with any programming language with very high
accuracy, and by testing with the same instrument after a few weeks
of exposure, with extreme accuracy. We present experimental evidence
to support our claim. We point out that programming teaching is
useless for those who are bound to fail and pointless for those who
are certain to succeed."

The Paper's Conclusion:
"There is a test for programming aptitude, or at least for success in
a first programming course. We have speculated on the reasons for its
success, but in truth we don't understand how it works any more than
you do. An enormous space of new problems has opened up before us
all."

They claim an ability to predict pass/fail for an introductory
programming course. That falls well short of claiming to have created
a reliable calibration mechanism, which seems to be the criterion in
your assessment.

> The idea that "programming" is a more "naturally occurring human
> attribute" than "reading"... I don't think either of those abilities
> are candidates for that title.
>

Well, that's just my attempt at suggesting a fairer benchmark than
the one I saw being used.

> > I'm sure there are better examples but the one that springs to mind is
> > perhaps riding a bike. If that doesn't fly then I appeal to personal
> > experience with my attempt at learning to water ski (completely
> > futile).
>
> This is your judgment, though.  Not the water ski instructor's, the
> kindergarten teacher's, or the computer science professor's.  It is
> not the teacher's job to give up on you.  It is your decision whether
> or not to continue making the attempt.
>

Sorry, but I think that's a bit evangelical and emotive. Nobody has
said "don't teach these people". All that has been claimed is that no
amount of teaching will change the pass/fail outcome of an
introductory programming course, and that that outcome can be
predicted.

My students have been able to write short programs that do things
with pictures. With the benefit of hindsight I attribute their
success at this to the expressiveness of the feedback they obtain
from their efforts rather than to the design recipe. Since moving on
from pictures they have shown a consistent inability to navigate
through all the steps of the design recipe by themselves. They
understand what they are shown, but

a) they can't do it by themselves - things invariably break down at
the templating stage (sketched below).
b) they can't abstract/analogise from what they have been shown (and
understood) to solve even the slightest variation of the problem.
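
For reference, the templating stage is where they must write down the
standard HtDP list template before filling it in - roughly this (the
data definition and the name lon-template are mine, not necessarily
what appears verbatim in class; the ... are the usual HtDP
placeholders to be filled in later):

    ;; A List-of-names is one of:
    ;; - empty
    ;; - (cons String List-of-names)

    ;; lon-template : List-of-names -> ???
    (define (lon-template alon)
      (cond
        [(empty? alon) ...]
        [(cons? alon) (... (first alon) ...
                           (lon-template (rest alon)) ...)]))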

Some examples of b)

Write member-John to determine whether John features in a list of
names. So we do the whole kaboodle in class - test cases, the list
template, the lot. We do it and they get it. Ask them now to write
member-x, where the name to search for is supplied as an argument -
they are lost.
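
For concreteness, here is roughly what the two versions look like
(reconstructed from memory, so the test data and parameter names are
illustrative):

    ;; member-John : List-of-names -> Boolean
    ;; is "John" one of the names in alon?
    (define (member-John alon)
      (cond
        [(empty? alon) false]
        [(cons? alon) (or (string=? (first alon) "John")
                          (member-John (rest alon)))]))

    (check-expect (member-John (cons "Mary" (cons "John" empty)))
                  true)

    ;; member-x : String List-of-names -> Boolean
    ;; is the name x one of the names in alon?
    (define (member-x x alon)
      (cond
        [(empty? alon) false]
        [(cons? alon) (or (string=? (first alon) x)
                          (member-x x (rest alon)))]))

    (check-expect (member-x "John" (cons "Mary" (cons "John" empty)))
                  true)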

We cover how to sum a list. We show how it's the same list template,
give them a list, and they understand how to use first and rest (and
combinations thereof) to get at list elements. We even create a human
list - pick out 5 people in class, allocate each a number, and say:
OK, here's your number; tell the next person in the list to give you
his sum, then add your number to it. They understand that the next
person is summing the rest of the list. They understand that this
result is being handed to the person at the head of the list (first
my-list). They get that. They understand how to combine these values
with plus, and how and why that gives the right answer. They get
this.
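
In code, the exercise amounts to something like this (sum-list is my
name for it; in class we grow it from the same list template):

    ;; sum-list : List-of-numbers -> Number
    ;; add up all the numbers in alon
    (define (sum-list alon)
      (cond
        [(empty? alon) 0]
        [(cons? alon) (+ (first alon)
                         (sum-list (rest alon)))]))

    (check-expect (sum-list (cons 1 (cons 2 (cons 3 empty)))) 6)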

Then you say: OK, let's do count-members-in-list. Same list, same
template, same test cases (albeit with the expected results
adjusted). They understand all that, but they cannot complete the
task by themselves.
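
That is, they cannot get from the sum sketch above to something like
this by themselves (again a reconstruction, not their exact code):

    ;; count-members-in-list : List-of-names -> Number
    ;; how many names are in alon?
    (define (count-members-in-list alon)
      (cond
        [(empty? alon) 0]
        [(cons? alon) (+ 1 (count-members-in-list (rest alon)))]))

    (check-expect
      (count-members-in-list (cons "Mary" (cons "John" empty))) 2)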

Posted on the users mailing list.