[plt-scheme] 3rd-8th Grade
>> I'll grant that I don't know what the drag/drop structure is, but
>> I'm certain I could design just as good a visual programming
>> language as there are text ones, and
>> just as bad a text language as there could be visual ones. Who
>> says there even are words behind the movements?
> I do. Societies thrive because there are words. If a team wanted
> their robot to go a little farther past the outer edge of the
> circle (a light detector sees it), they told the whiz-kid (with
> words; they did not drag and drop the whiz kid) what they wanted.
> That's what made me wonder: one whiz-kid did the dragging and
> dropping for all 9 teams because he knew how to do the dragging and
> dropping... the rest of them knew how to do the speaking. What if
> the machine could listen?
This thread reminds me of some discussions on the "Lambda-the-
Ultimate" site a year or so ago; basically, the discussion was whether
"graphical" or flowcharting languages were "the way of the future".
As always, there was no definitive conclusion. However, a compelling
argument against such languages was the simple observation that this
fight was lost long ago in the triumph of written language over
hieroglyphics. Why? Mainly because of extensibility: new words and
new word groups are easier to create (and agree on) than the wholly
new glyphs required to add a new concept to a pictographic language.
Of course, linguistics is not my area of expertise, so I may be
presenting an overly simplistic view of the argument (which, IIRC,
was posited by our very own Anton van Stratten); however, I did find
it satisfying at a "gut level".
For example, I work with a bunch of physicists. One of them is a
LabView expert, and prefers to write all of his instrument control
and data analysis routines in it. To me, LabView looks like a rat's
nest of connected wires and indistinguishable nodes performing
mysterious tasks. Much of the effort of using it (judging from my
inept fumblings) involves memorizing the various icons and what they
mean (THIS is a loop counter, and THAT is a signal generator, and
THAT takes user input from a slider, etc.). Each icon has different
connection points, and confusing knobs and switches that are used to
adjust its behavior.
Documentation and commentary are another tough area. I find the text-
based approach very natural for embedding comments and notation. The
variable and function names (and contracts) themselves provide
documentation. On the other hand, a network of wires and nodes
requires pop-up help or "mouse-over" text boxes to provide a place
to write the text. This seems like a clear indication of a missing
element.
For me, LabView is a non-starter. It's difficult for multiple
parties to collaborate on software, since the "merge" functionality
and source control tools are very specific to LabView and confusing.
For the physicist, it is a great tool that makes perfect sense to
him. He's able to create usable software that does what he needs.
The rest of us use C++, Perl, Scheme, Mathematica, and other text-
based tools to build prototypes and production software.
>> That's probably not the case, but I don't see any reason why
>> "words" have to be part of the programming process. Some friends
>> of mine went through a CS Master's degree program at Carnegie
>> Mellon where they did significant amounts of drag-and-drop
>> programming, so clearly the concept, if not the Lego
>> implementation, is ready for prime-time.
> The robot meeting was about making systems work. They had to make
> sure that bars didn't fall off and rubber tracks didn't jam and
> that the programs worked ok. Programming was only a part of it,
> just like real life. I wonder if it is ok to imply that systems
> cannot be instructed by words, when they clearly can be.
At root, I think focusing on text versus drag-and-drop programming is
missing the point. Structuring syntactically correct statements has
never been the hardest part of programming; rather, it is the logic
and structure of the system that is difficult to get right. Tools
like UML architecting tools and LabView paper over the design process
with glossy graphics, but I don't think there is any compelling
evidence that they actually solve the "hard problem". Certainly, I
have not observed the use of UML to magically force engineers to
create good designs. Rather, it provides a useful way to visualize
the system at different levels of detail.
In closing, I ask you to consider these two equivalent programs to
compute the sum of five random numbers:
http://ftp.rasip.fer.hr/research/labview/example5.html
Compare that with a simple Scheme implementation:
(require (lib "27.ss" "srfi"))  ; SRFI 27: random-integer

(define (accum count)
  ;; Sum COUNT random integers in [0, 100).
  (letrec ((accum-internal
            (lambda (x y)
              (if (zero? y)
                  x
                  (accum-internal (+ x (random-integer 100))
                                  (- y 1))))))
    (accum-internal 0 count)))

;; e.g. (accum 5) returns the sum of five random integers
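
Incidentally, the same loop can be written a bit more compactly with
a named let, which avoids the explicit letrec and lambda; this is
just an equivalent sketch of the accum procedure above (accum* is my
name for it, not part of any library):

(require (lib "27.ss" "srfi"))  ; SRFI 27: random-integer

;; Equivalent to accum above, written with a named let.
(define (accum* count)
  (let loop ((sum 0) (remaining count))
    (if (zero? remaining)
        sum
        (loop (+ sum (random-integer 100))
              (- remaining 1)))))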
-Brent