From: David Abrahams (abrahams_at_[hidden])
Date: 2000-10-11 10:22:53
----- Original Message -----
From: "Ed Brey" <brey_at_[hidden]>
> You might try browsing each version of the page with lynx. For each
> version, use the print command ("p") and choose to print to a file. The
> print command saves a formatted plain-text version of the file. This
> makes for versions of the files that are both readable and diff-friendly. I
> tried it and it worked pretty well. Almost all the lines highlighted by the
> tool were due to substantive improvements in the new version. There are
> quite a few small changes, but a lot of small improvements can be just as
> valuable as a few big changes.
I was about to download lynx, but couldn't figure out what anyone would want
it for (other than this particular job). Can you motivate this tool for me?
Also, I realized that emacs' ediff decided not to auto-refine a large region
of diffs for speed reasons, but that I could use the '*' key to get much
clearer results [did I mention I love ediff?] So I think I'll be able to do
the review with the tools I have.
> Here's my two cents about the HTML changes (there's always a cost for
> advice, right? :-). Based on my intermediate-level HTML knowledge,
> the changes look like a move in the right direction, from a technical point of
> view. HTML authors have gotten away without trouble for a while due to the
> prevalence of a small number of very popular browsers.
What kind of trouble? I am not being resistant here, I just hardly know
_anything_ about HTML at a deep level.
> However, this does
> eventually break down. Just a quick example: lynx rendered the section
> headings centered and in all caps in the original, because the section
> headings were tagged with <H1>. Such a strong heading was probably not the
> intent, but was never caught since on GUI browsers, the difference between
> H1 and H2 is more subtle.
Ah. Maybe that's a reason to get lynx.
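[Editor's note: to make Ed's point concrete, a minimal fragment (heading text made up). The tags declare structure, not appearance, so a browser like lynx is entitled to render H1 much more loudly than H2:]

```html
<!-- H1 is the document's top-level title; lynx centers it in all caps. -->
<H1>Boost Library Documentation</H1>

<!-- H2 is a section heading; it gets a more modest rendering. -->
<H2>Rationale</H2>
```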
> To me it seems only fitting that a group committed to writing
> standard-conforming C++ would produce standard-conforming HTML.
I totally agree with the intention. But it certainly seems like it could be
a serious additional barrier to entry at boost, an idea which I don't like
at all. I hear that most of the automated tools generate garbage, by an
HTML-expert's standards, but I'm not sure I do any better by hand. I sure
wish someone else would weigh in on this; I don't know enough to speak with
any authority.
> While HTML
> is not the focus, and so a much lower priority, I welcome the work of an
> experienced HTML-writer to improve on what Microsoft's tools may produce.
> As for maintenance, I believe that even maintainers unfamiliar with the details
> of HTML will find that reading valid HTML submissions can be done with
> pythonesque ease
Well, I don't think that reading std::pair<vector<int>::iterator,
bool> is ever going to be pythonesquically easy, but I take your point.
Daryle's HTML source is certainly easier to understand than the previous
version, provided you know the meaning of these fancy HTML4 tags... which I
don't.
> (as long as there's a way to easily see version differences)
That's the crux of my concern, in case it wasn't obvious.
> since the language is pretty simple. Reading the raw HTML
> could be made even more painless if we could introduce CSS, but given that
> Microsoft's CSS conformance is poor and Netscape's is abysmal, the
> timing just isn't right (but think of the fun we could have with
> flames and laments :-). (Of course, I've also found a non-conformance
> problem in the CSS validator promoted by the W3C that I can't seem to get
> the maintainer to fix. :-( )
You're talking over my head again.
Boost list run by bdawes at acm.org, gregod at cs.rpi.edu, cpdaniel at pacbell.net, john at johnmaddock.co.uk