<div dir="ltr"><div>Please correct me if I'm way off base, but the whole issue of a purely functional language being impractical because it should not "go outside" the function (e.g. for IO) seems to hinge on the fact that there is (today) a segmented, discrete, world out there where data and apis etc. reside outside of your box running your program. What if you had limitless live memory and parallelism? Isn't the vaunted Google File System just this sort of "One Memory/Massively Parallel" beast? If your purely functional program were running in such a One Memory/massively parallel space, you could have some Ueber-function encompassing the data, within which all your subsequent functions could play. You could do anything and everything totally purely functionally. Right? So with Haskell, everything is fine, functionally pure -- until you have to go outside to interact with stuff outside your program. So don't go "outside." Right?</div>

I'm coming at this from an old Cartography/GIS angle. Many years ago I heard of a project to write GIS software in (then-new) Smalltalk. But it was deemed infeasible because the model was expected to bring an entire mapping project into live memory, with all its geographic classes instantiated and ready to go... and only then would you do your work. But of course today, even on home machines, we measure memory in gigabytes.

Obviously, today's computing world is about dealing with lots and lots of discrete things: machines, datasets, APIs, etc. So my specific question is: isn't purely functional programming really just waiting for One Memory/Massive Parallelism, wherein all its supposed foibles become moot? The whole "sort-of" functional world simply takes all that discreteness for granted. But isn't the purely functional paradigm driving us toward a day when some sort of OM/MP (virtual or real) is the rule?

LB
Grand Marais, MN