Hi Mark,
On Mon, 24 Dec 2018 13:12:23 -0500, Mark H Weaver <mhw@netris.org> wrote:
> Of course, the usual reason to choose UTF-32 is to support non-ASCII
> characters while retaining fixed-width code points, so that string
> lookups are straightforward and efficient.
This kind of lookup is almost never what is actually needed. There are many people who assume a character is the same as a codepoint, and to those people UTF-32 brings something to the table, but it's really not useful if people do text processing correctly; see below.
(Of course, whether packages actually do this remains to be seen.)
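To make the character/codepoint distinction concrete: "é" can be written as "e" followed by U+0301 COMBINING ACUTE ACCENT, so even a fixed-width encoding doesn't give you character indexing. In Guile:

  ;; One user-perceived character, but two code points.
  (string-length "e\x301;")   ; => 2, not 1
  (string-ref "e\x301;" 1)    ; => the combining accent by itself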
> Using UTF-8 improves space efficiency, but at the cost of extra code
> complexity.
I agree.
> That extra
> complexity is what I guess we would need to add to each program that
> currently uses UTF-32.
Yes, but they usually have to do stream processing even with UTF-32 (because a character can be composed of a possibly unbounded number of codepoints), so the infrastructure should already be there and the effort should be minimal.
> Alternatively, we could extend the on-disk
> format to support UTF-8 and then add some kind of "load hook" that
> converts the string to UTF-32 at load time.  Either way, it's likely to
> be a can of worms.
If it ever came to that, a pluggable reference scanner would be preferable. But really, it would irk me to have so much complexity in something so basic (the reference scanner) for no end-user gain (as a distribution we could just mandate UTF-8 for references and the problem would be gone for the user with no loss of functionality).
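For comparison, the simple case is essentially a plain byte-substring search for the hash parts of store file names. A minimal sketch in Guile (the procedure name is mine; HASH stands for the base32 hash part of a store file name):

  (use-modules (ice-9 textual-ports) (srfi srfi-13))

  (define (references-hash? file hash)
    ;; Reading with ISO-8859-1 maps every byte to exactly one
    ;; character, so substring search works on arbitrary binary data.
    (call-with-input-file file
      (lambda (port)
        (string-contains (get-string-all port) hash))
      #:encoding "ISO-8859-1"))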
It's always easy to add special cases, but more code means more bugs, and I think if possible it's best to have only the simple case implemented in the core, because it's less complicated, which means it's more likely to be correct (for the case it does handle). In the end it depends on what would be more code, and what would be more widely used.
Also, if we wanted to debug reference errors, we couldn't use grep anymore, because it can't handle UTF-32 either (neither can any of the other UNIX tools).
Also, I really don't want to return to the time where I had to call iconv once every three commands to be able to do anything useful on UNIX.
Also, the build daemon is written in C++, and C++ strings are widely known to have very, very bad codepoint awareness (to say nothing of the horrible conversion facilities).
Also, if both UTF-32 and UTF-8 are used on disk, care needs to be taken not to misdetect a UTF-8 sequence as a UTF-32 sequence of different text, or the other way around, but that's unlikely for ASCII strings.
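The byte patterns of ASCII text in the two encodings are quite distinct, which is why misdetection is unlikely. In Guile:

  (use-modules (rnrs bytevectors) (ice-9 iconv))

  ;; UTF-8 leaves ASCII bytes as they are:
  (string->utf8 "gnu")
  ;; => #vu8(103 110 117)

  ;; UTF-32LE pads every ASCII code point with three zero bytes:
  (string->bytevector "gnu" "UTF-32LE")
  ;; => #vu8(103 0 0 0 110 0 0 0 117 0 0 0)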
> I really think it would be a mistake to try to force every program and
> language implementation to use our preferred string representation.  I
> suspect it would be vastly easier to compromise and support a few other
> popular string representations in Guix, namely UTF-16 and UTF-32.
In 1992, UTF-8 was invented. Subsequently, most of the Internet, all new GNU/Linux distributions, all UNIX GUI frameworks, Subversion etc. standardized on UTF-8, with the eventual goal of standardizing all network transfer and storage on UTF-8. I think that by now the outliers are the ones who need to change, otherwise these senseless encoding conversions will never cease. It's not like different encodings allow for better expression of writing or anything useful to the end user.
As a distribution we can't force upstream to change, but just filing bug reports upstream would let us see where they stand on this.
> If you don't want to change the daemon, it could be worked around in our
> build-side code as follows: we could add a new phase to certain build
> systems (or possibly gnu-build-system) that scans each output for
> UTF-16/32 encoded store references that are never referenced in UTF-8.
> If such references exist, a file with an unobtrusive name would be added
> to that output containing those references encoded in UTF-8.  This would
> enable our daemon's existing reference scanner to find all of the
> references.
I agree that that would be nice. As a first step, even just detecting problems like that and erroring out would be okay, in order to find them in the first place. Right now, it's difficult to detect, and so it's also difficult to say how widespread the problem is. If the problem is widespread enough, my tune could change very quickly.
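A sketch of what that detect-and-error step could look like, again in Guile (all names here are made up for illustration; only UTF-16LE is shown, UTF-32 would be analogous, and the list of hashes would have to come from the build inputs):

  (use-modules (ice-9 textual-ports) (srfi srfi-1) (srfi srfi-13)
               (guix build utils))

  (define (utf16le-pattern str)
    ;; ASCII text in UTF-16LE: every character is followed by a zero byte.
    (list->string (append-map (lambda (c) (list c #\nul))
                              (string->list str))))

  (define (wide-reference? file hash)
    ;; Does FILE contain HASH encoded as UTF-16LE?
    (call-with-input-file file
      (lambda (port)
        (string-contains (get-string-all port) (utf16le-pattern hash)))
      #:encoding "ISO-8859-1"))

  (define (check-wide-references output hashes)
    ;; Error out on the first wide-encoded store reference found
    ;; under the OUTPUT directory.
    (for-each (lambda (file)
                (for-each (lambda (hash)
                            (when (wide-reference? file hash)
                              (error "UTF-16 store reference in" file)))
                          hashes))
              (find-files output)))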
What you propose is similar to what I did for Java in Guix, only it gives us even more advantages in the Java case (faster class loading and eventual non-propagated inputs).