[plt-scheme] 3rd-8th Grade
On Mar 18, 2006, at 2:02 PM, Brent Fulgham wrote:
>
> This thread reminds me of some discussions on the
> "Lambda-the-Ultimate" site a year or so ago; basically the discussion
> was whether "graphical" or flowcharting languages were "the way of the
> future".
>
I'm eager to find out. In the case of the robot club, I wasn't
suggesting "either graphics or text." Perhaps a few more students
would have participated in the programming, for example, if the
flowchart had summarized what the program would do once transferred to
the robot. Instead, they lined up for 'fuel' from the whiz-kid without
even looking at the screen.
> As always, there was no definitive conclusion; however, a compelling
> argument against such languages was simply that this fight was lost
> long ago in the triumph of written language over hieroglyphics. Why?
> Mainly because of extensibility: new words and new word groups are
> easier to create (and agree on) than the wholly new glyphs required
> to add a new concept to a pictographic language.
>
> Of course, linguistics is not my area of expertise, so I may be
> presenting an overly simplistic view of the argument (which, IIRC,
> was posited by our very own Anton van Straaten); however, I did find
> it satisfying at a "gut level".
>
> For example, I work with a bunch of physicists. One of them is a
> LabView expert, and prefers to write all of his instrument control and
> data analysis routines in it. To me, LabView looks like a rat's nest
> of connected wires and indistinguishable nodes performing mysterious
> tasks. Much of the effort of using it (judging from my inept
> fumblings) went into memorizing the various icons and what they mean
> (THIS is a loop counter, THAT is a signal generator, and THAT takes
> user input via a slider, etc.). Each icon has different connection
> points, plus confusing knobs and switches that adjust its behavior.
This is one reason why the robots were programmed by the one whiz-kid:
he had the icons memorized. Then again, function names have to be
memorized for text programming, too.
> Documentation and commentary are another touchy area. I find the
> text-based approach very natural for embedding comments and notation.
> The variable and function names themselves (and contracts) provide
> documentation. A network of wires and nodes, on the other hand,
> requires pop-up help or "mouse-over" text boxes to provide a place to
> write the text. That seems like a clear indication of a missing
> element.
>
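True; in PLT Scheme a provide/contract form documents a function's
domain and range right where it is defined. A minimal sketch (the
module and function names here are made up for illustration):

(module temps mzscheme
  (require (lib "contract.ss"))
  ;; the contract line doubles as documentation
  (provide/contract
   [celsius->fahrenheit (number? . -> . number?)])
  (define (celsius->fahrenheit c)
    (+ 32 (* 9/5 c))))
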
I was within a gnat's eyelash of using emacs to control and sequence
the operation of our custom laboratory equipment. That way, we could
have 'taken notes', run experiments, and read the documentation in the
same environment. Weather data, for example, could have been
periodically inserted by elisp into the notes of principal
investigators. It seems more useful, in this case, to have a text
environment control the creation of graphical ones (for images,
diagnostics, controls, etc.) than the other way around.
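In PLT Scheme terms (the elisp would be analogous), that note-taking
idea is just a timestamped append to a file; fetch-weather below is a
stand-in stub, not a real instrument call:

;; append a timestamped weather line to a notes file
(define (fetch-weather)
  "21 C, falling pressure")   ; stub for illustration only

(define (log-weather! notes-file)
  (with-output-to-file notes-file
    (lambda ()
      (printf "~a  weather: ~a~n" (current-seconds) (fetch-weather)))
    'append))

(log-weather! "notes.txt")
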
> For me, LabView is a non-starter. It's difficult for multiple parties
> to collaborate on software, since the "merge" functionality and source
> control tools are very specific to LabView and confusing. For the
> physicist, it is a great tool that makes perfect sense to him. He's
> able to create usable software that does what he needs. The rest of
> us use C++, Perl, Scheme, Mathematica, and other text-based tools to
> build prototypes and production software.
Believe it or not, the robot club is also affected by the collaboration
issue. If the robot software could create text programs (maybe it
can), then I could set up a telescope-like robot that they could
program to simulate looking at the moon or the space station or
whatever. They could then bring the working programs to the site and
run a big telescope. The graphical way requires extending an
environment for their robot, and it requires development on my end.
That's not going to happen. If a common language were used by both
ends, the problem would reduce to an interface issue.
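Concretely, the common language could be as small as a list of
s-expressions that both ends interpret, with only the execute backend
differing between simulator and telescope. A sketch (the command names
are invented):

(define (run-program program execute)
  (for-each execute program))

(define sample-program
  '((point-at moon)
    (track 30)            ; seconds
    (point-at station)))

;; simulator backend; the real telescope would supply its own
(define (simulate cmd)
  (printf "simulating: ~s~n" cmd))

(run-program sample-program simulate)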
>
>>> That's probably not the case, but I don't see any reason why "words"
>>> have to be part of the programming process. Some friends of mine
>>> went through a CS Master's degree program at Carnegie Mellon where
>>> they did significant amounts of drag-and-drop programming, so
>>> clearly the concept, if not the Lego implementation, is ready for
>>> prime time.
>> The robot meeting was about making systems work. They had to make
>> sure that bars didn't fall off and rubber tracks didn't jam and that
>> the programs worked ok. Programming was only a part of it, just like
>> real life. I wonder if it is ok to imply that systems cannot be
>> instructed by words, when they clearly can be.
>
> At root, I think focusing on text versus drag-and-drop programming is
> missing the point. Structuring syntactically correct statements has
> never been the hardest part of programming; rather, it is the logic
> and structure of the system that are difficult to get right. UML
> modeling tools and LabView paper over the design process
> with glossy graphics, but I don't think there is any compelling
> evidence that they actually solve the "hard problem". Certainly, I
> have not observed the use of UML to magically force engineers to
> create good designs. Rather, it provides a useful way to visualize
> the system at different levels of detail.
>
> In closing, I ask you to consider these two equivalent programs to
> compute the sum of five random numbers:
>
> http://ftp.rasip.fer.hr/research/labview/example5.html
>
> Compare that with a simple Scheme implementation:
>
> (require (lib "27.ss" "srfi"))   ; SRFI 27: random-integer
>
> ;; sum of `count` random integers below 100
> (define (accum count)
>   (letrec ((accum-internal
>             (lambda (x y)
>               (if (= y 0)        ; not eq?, which is unreliable on numbers
>                   x
>                   (accum-internal (+ x (random-integer 100))
>                                   (- y 1))))))
>     (accum-internal 0 count)))
My colleagues would not be convinced.
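Not even, I suspect, by a shorter version, say with build-list from
mzlib (a sketch):

(require (lib "list.ss"))   ; build-list

;; sum of `count` random integers below 100
(define (accum count)
  (apply + (build-list count (lambda (i) (random 100)))))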
rac